81

The design of a probabilistic engineering economic analysis package for a microcomputer

Puetz, Gilbert H. January 1985 (has links)
No description available.
82

A Comparison of Dynamic and Classical Event Tree Analysis for Nuclear Power Plant Probabilistic Safety/Risk Assessment

Metzroth, Kyle G. 22 July 2011 (has links)
No description available.
83

Risk-based design of structures for fire

Al-Remal, Ahmad Mejbas January 2013 (has links)
Techniques of performance-based design in fire safety have developed notably in the past two decades. One of the reasons for departing from the prescriptive methods is the ability of performance-based methods to form a scientific basis for the cost-risk-benefit analysis of different fire safety alternatives. Apart from a few exceptions, observations of past fires have shown that the structure’s contribution to the overall fire resistance was considerably underestimated. The purpose of this research is to outline a risk-based design approach for structures in fire. Probabilistic methods are employed to ascertain uniform reliability indices in line with the classical trend in code development. Modern design codes for complex phenomena such as fire have been structured to facilitate design computations. Prescriptive design methods specify fire protection methods for structural systems based on laboratory-controlled and highly restrictive testing regimes. Those methods inherently assume that the tested elements behave similarly in real structures irrespective of their loading, location or boundary conditions. This approach is contested by many researchers, and analyses following fire incidents indicated an alarming discrepancy between anticipated and actual structural behaviour during real fires. In formulating design and construction codes, code writers deal with the inherent uncertainties by setting a ceiling to the potential risk of failure. The latter process is implemented by specifying safety parameters that are derived via probabilistic techniques aimed at harmonising the risks ensuing from different load scenarios. The code structure addresses the probability of failure with adequate detail and accuracy. The other component of the risk metric, namely the consequence of failure, is a subjective field that assumes a multitude of variables depending on the context of the problem. In codified structural design, the severity of failure is implicitly embodied in the different magnitudes of safety indices applied to different modes of structural response. This project introduces a risk-based method for the design of structures in fire. It provides a coherent approach to a quantified treatment of risk elements that meets the demands of performance-based fire safety methods. A number of proposals are made for rational acceptable risk and reliability parameters in addition to a damage index with applications in structural fire safety design. Although the example application of the proposed damage index is a structure subjected to fire effects, the same rationale can easily be applied to the assessment of structural damage due to other effects.
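
As a hedged illustration of the probabilistic quantities referred to above (probability of failure, reliability index and a probability-times-consequence risk metric), the following minimal Monte Carlo sketch compares a lognormal fire resistance against a lognormal fire demand. All distribution parameters and the consequence figure are assumptions for illustration, not values from the thesis.

```python
# Minimal Monte Carlo sketch of the reliability quantities discussed above.
# Distribution parameters and the consequence weighting are illustrative
# assumptions, not values from the thesis.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 1_000_000

# Lognormal capacity R (e.g. fire resistance time, min) and demand S
# (e.g. required fire resistance, min), with hypothetical medians/dispersions.
R = rng.lognormal(mean=np.log(150.0), sigma=0.25, size=n)
S = rng.lognormal(mean=np.log(60.0), sigma=0.40, size=n)

p_f = np.mean(R <= S)        # probability of failure
beta = -norm.ppf(p_f)        # equivalent reliability index
consequence = 2.5e6          # assumed monetised consequence of failure
risk = p_f * consequence     # risk metric = probability x consequence

print(f"P_f  = {p_f:.4f}")
print(f"beta = {beta:.2f}")
print(f"risk = {risk:,.0f}")
```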
84

Service-Based Approach for Intelligent Agent Frameworks

Mora, Randall P., Hill, Jerry L. 10 1900 (has links)
ITC/USA 2011 Conference Proceedings / The Forty-Seventh Annual International Telemetering Conference and Technical Exhibition / October 24-27, 2011 / Bally's Las Vegas, Las Vegas, Nevada / This paper describes a service-based Intelligent Agent (IA) approach for machine learning and data mining of distributed heterogeneous data streams. We focus on an open architecture framework that enables the programmer/analyst to build an IA suite for mining, examining and evaluating heterogeneous data for semantic representations, while iteratively building the probabilistic model in real-time to improve predictability. The Framework facilitates model development and evaluation while delivering the capability to tune machine learning algorithms and models to deliver increasingly favorable scores prior to production deployment. The IA Framework focuses on open standard interoperability, simplifying integration into existing environments.
85

Enhanced probabilistic broadcasting scheme for routing in MANETs : an investigation in the design analysis and performance evaluation of an enhanced probabilistic broadcasting scheme for on-demand routing protocols in mobile ad-hoc networks

Hanashi, Abdalla Musbah Omar January 2009 (has links)
Broadcasting is an essential and effective data propagation mechanism with several important applications, such as route discovery, address resolution and many other network services. Though data broadcasting has many advantages, it can also cause a high degree of contention, collision and congestion, leading to what is known as the 'broadcast storm' problem. Broadcasting has traditionally been based on the flooding protocol, which simply floods the network with a large number of rebroadcast messages until they reach all network nodes. A good probabilistic broadcast protocol can achieve a high saved-rebroadcast (SRB) ratio, few collisions and a low number of relays. When a node is in a sparse region of the network, rebroadcasting is relatively more important, while the potential redundancy of rebroadcasting is low because there are few neighbours which might rebroadcast the packet unnecessarily. Further, in such a situation, contention over the wireless medium resulting from redundant broadcasts is not as serious as in scenarios with medium- or high-density node populations. This research proposes a dynamic probabilistic approach that fine-tunes the rebroadcast probability for route request packets (RREQs) according to the number of neighbouring nodes in the ad-hoc network, without requiring distance measurements or location-determination devices. The main goal of this approach is to reduce the number of rebroadcast packets and collisions in the network. The performance of the proposed approach is investigated and compared with simple AODV, fixed-probabilistic and adjusted-probabilistic flooding [1] schemes using the GloMoSim network simulator and a number of important MANET parameters, including node speed, traffic load and node density under a Random Waypoint (RWP) mobility model. Performance results reveal that the proposed approach is able to achieve a higher SRB ratio, fewer collisions and a lower number of relays than simple AODV, fixed-probabilistic and adjusted-probabilistic flooding. In this research, extensive simulation experiments have been conducted in order to study and analyse the proposed dynamic probabilistic approach under different mobility models. The mobility model is designed to describe the movement pattern of mobile users, and how their position, velocity and acceleration change over time. In this study, a new enhanced dynamic probabilistic flooding scheme is presented. The rebroadcast probability p is calculated dynamically, and the rebroadcasting decision is based on the average number of nodes in the ad-hoc network. The performance of the new enhanced algorithm is evaluated and compared to the simple AODV, fixed-probabilistic, adjusted-probabilistic and dynamic-probabilistic flooding schemes. It is demonstrated that the new algorithm has superior performance characteristics in terms of collisions, relays and SRB. Finally, the proposed schemes are tested and evaluated through a set of experiments under different mobility models to demonstrate the relative merits and capabilities of these schemes.
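
The core idea of the dynamic scheme described above, raising the rebroadcast probability in sparse neighbourhoods and lowering it in dense ones, can be sketched compactly. The formula and parameter values below are assumptions for illustration and are not the thesis's actual scheme.

```python
# Illustrative sketch of a dynamic probabilistic rebroadcast decision: the
# probability is tuned from the local neighbour count relative to the
# average node degree. The specific formula and parameters are assumptions.
import random

def rebroadcast_probability(n_neighbours: int, avg_degree: float,
                            p_base: float = 0.6,
                            p_min: float = 0.1, p_max: float = 1.0) -> float:
    """Higher probability in sparse neighbourhoods, lower in dense ones."""
    if n_neighbours == 0:
        return p_max
    p = p_base * (avg_degree / n_neighbours)
    return max(p_min, min(p_max, p))

def should_rebroadcast_rreq(n_neighbours: int, avg_degree: float) -> bool:
    """Decide whether to relay a route request (RREQ) packet."""
    return random.random() < rebroadcast_probability(n_neighbours, avg_degree)

# Example: a node with 3 neighbours in a network averaging 8 neighbours per node
print(rebroadcast_probability(3, 8.0))   # ~1.0  -> sparse region, almost always relay
print(rebroadcast_probability(20, 8.0))  # ~0.24 -> dense region, relay rarely
```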
86

Using mortars to upscale permeability in heterogeneous porous media from the pore to continuum scale

Bhagmane, Jaideep Shivaprasad 20 September 2010 (has links)
Pore-scale network modeling has become an effective method for accurate prediction and upscaling of macroscopic properties, such as permeability. Networks are either mapped directly from real media or generated with stochastic methods that simulate their heterogeneous pore structure. Flow is then modeled by enforcing conservation of mass in each pore, while approximations to the momentum equations are solved in the connecting throats. In many cases network modeling compares favorably to experimental measurements of permeability. However, computational and imaging restrictions generally limit the network size to the order of 1 mm³ (a few thousand pores). For extremely heterogeneous media these models are not large enough to capture the petrophysical properties of the entire medium, and inaccurate results can be obtained when upscaling to the continuum scale. Moreover, the boundary conditions imposed are artificial; a pressure gradient is imposed in one dimension, so the influence of flow behavior in the surrounding media is not included. In this work we upscale permeability in large, heterogeneous media using physically-representative pore-scale network models (domain of ~10⁶ pores). High-performance computing is used to obtain accurate results in these models, but a more efficient, novel domain decomposition method is introduced for upscaling the permeability of pore-scale models. The medium is decomposed into hundreds of smaller networks (sub-domains) that are then coupled with the surrounding models to determine accurate boundary conditions. Finite element mortars are used as a mathematical tool to ensure interfacial pressures and fluxes are matched at the network boundaries. The results compare favorably to the more computationally intensive (and impractical) approach of upscaling the media as a single model. Moreover, the results are much more accurate than traditional hierarchical upscaling methods. This upscaling technique has important implications for using pore-scale models directly in reservoir simulators in a multiscale setting. The upscaling techniques introduced here for single-phase flow can also be easily extended to other flow phenomena, such as multiphase and non-Newtonian behavior.
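
The core mechanism in this abstract (mass conservation at each pore, Hagen-Poiseuille conductance in the throats and Darcy's law to upscale a single permeability) can be illustrated on a deliberately simplified one-dimensional pore chain. All geometric and fluid parameters below are assumptions for illustration, not the thesis's networks or its mortar coupling.

```python
# Minimal pore-network permeability sketch: mass balance at every pore,
# Hagen-Poiseuille conductance in every connecting throat, and Darcy's law
# to upscale a single permeability. The 1-D chain and dimensions are assumed.
import numpy as np

mu = 1.0e-3          # fluid viscosity, Pa.s
L_throat = 50e-6     # throat length, m
n_pores = 20         # pores in a serial chain between inlet and outlet
rng = np.random.default_rng(1)
radii = rng.uniform(5e-6, 20e-6, size=n_pores + 1)   # heterogeneous throat radii

# Hagen-Poiseuille hydraulic conductance of each throat
g = np.pi * radii**4 / (8.0 * mu * L_throat)

# Assemble mass balance  sum_j g_ij (p_j - p_i) = 0  for interior pores,
# with fixed pressures at the inlet (p_in) and outlet (p_out).
p_in, p_out = 2.0e5, 1.0e5
A = np.zeros((n_pores, n_pores))
b = np.zeros(n_pores)
for i in range(n_pores):
    A[i, i] = -(g[i] + g[i + 1])
    if i > 0:
        A[i, i - 1] = g[i]
    else:
        b[i] -= g[i] * p_in
    if i < n_pores - 1:
        A[i, i + 1] = g[i + 1]
    else:
        b[i] -= g[i + 1] * p_out
p = np.linalg.solve(A, b)

# Total flux through the inlet throat, then Darcy permeability of the sample
Q = g[0] * (p_in - p[0])
length = (n_pores + 1) * L_throat
area = (100e-6) ** 2                      # assumed cross-sectional area
k = Q * mu * length / (area * (p_in - p_out))
print(f"upscaled permeability ~ {k:.3e} m^2")
```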
87

Near-Fault Forward-Directivity Aspects of Strong Ground Motions in the 2010-11 Canterbury Earthquakes

Joshi, Varun Anil January 2013 (has links)
The purpose of this thesis is to conduct a detailed examination of the forward-directivity characteristics of near-fault ground motions produced in the 2010-11 Canterbury earthquakes, including evaluating the efficacy of several existing empirical models which form the basis of frameworks for considering directivity in seismic hazard assessment. A wavelet-based pulse classification algorithm developed by Baker (2007) is first used to identify and characterise ground motions which demonstrate evidence of forward-directivity effects from significant events in the Canterbury earthquake sequence. The algorithm fails to classify a large number of ground motions which clearly exhibit an early-arriving directivity pulse due to: (i) incorrect pulse extraction resulting from the presence of pulse-like features caused by other physical phenomena; and (ii) inadequacy of the pulse indicator score used to carry out binary pulse-like/non-pulse-like classification. An alternative 'manual' approach is proposed to ensure 'correct' pulse extraction, and the classification process is also guided by examination of the horizontal velocity trajectory plots and the source-to-site geometry. Based on the above analysis, 59 pulse-like ground motions are identified from the Canterbury earthquakes which, in the author's opinion, are caused by forward-directivity effects. The pulses are also characterised in terms of their period and amplitude. A revised version of the B07 algorithm developed by Shahi (2013) is also subsequently utilised, but without any notable improvement in the pulse classification results. A series of three chapters is dedicated to assessing the predictive capabilities of empirical models for: (i) the probability of pulse occurrence; (ii) the response spectrum amplification caused by the directivity pulse; and (iii) the period and amplitude (peak ground velocity, PGV) of the directivity pulse, using observations from four significant events in the Canterbury earthquakes. Based on the results of logistic regression analysis, it is found that the pulse probability model of Shahi (2013) provides the most improved predictions in comparison to its predecessors. Pulse probability contour maps are developed to scrutinise observations of pulses/non-pulses against predicted probabilities. A direct comparison of the observed and predicted directivity amplification of acceleration response spectra reveals the inadequacy of broadband directivity models, which form the basis of the near-fault factor in the New Zealand loadings standard, NZS1170.5:2004. In contrast, a recently developed narrowband model by Shahi & Baker (2011) provides significantly improved predictions by amplifying the response spectra within a small range of periods. The significant positive bias demonstrated by the residuals associated with all models at longer vibration periods (in the Mw7.1 Darfield and Mw6.2 Christchurch earthquakes) is likely due to the influence of basin-induced surface waves and non-linear soil response. Empirical models for the pulse period notably under-predict observations from the Darfield and Christchurch earthquakes, inferred to be a result of both nonlinear site response and the influence of the Canterbury basin. In contrast, observed pulse periods from the smaller magnitude June (Mw6.0) and December (Mw5.9) 2011 earthquakes are in good agreement with predictions. Models for the pulse amplitude generally provide accurate estimates of the observations at source-to-site distances between 1 km and 10 km.
At longer distances, observed PGVs are significantly under-predicted due to their slower apparent attenuation. Mixed-effects regression is employed to develop revised models for both parameters using the latest NGA-West2 pulse-like ground motion database. A pulse period relationship which accounts for the effect of faulting mechanism, using rake angle as a continuous predictor variable, is developed. The use of a larger database in model development, however, does not result in improved predictions of pulse period for the Darfield and Christchurch earthquakes. In contrast, the revised model for PGV provides a more appropriate attenuation of the pulse amplitude with distance, and does not exhibit the bias associated with previous models. Finally, the effects of near-fault directivity are explicitly included in NZ-specific probabilistic seismic hazard analysis (PSHA) using the narrowband directivity model of Shahi & Baker (2011). Seismic hazard analyses are conducted with and without considering directivity for typical sites in Christchurch and Otira. The inadequacy of the near-fault factor in NZS1170.5:2004 is apparent based on a comparison with the directivity amplification obtained from PSHA.
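
The pulse-probability evaluation described above lends itself to a compact illustration. The sketch below fits a logistic model of pulse occurrence to synthetic data using hypothetical source-to-site geometry predictors; the predictor set, coefficients and data are illustrative assumptions only and are not the empirical models (e.g. Shahi, 2013) assessed in the thesis.

```python
# Hedged sketch of a logistic-regression pulse-occurrence model built from
# simple source-to-site geometry predictors. The predictors and the synthetic
# "observations" are assumptions for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 400
r = rng.uniform(0.5, 60.0, n)       # closest distance to rupture, km
s = rng.uniform(0.0, 40.0, n)       # rupture length travelled towards the site, km
theta = rng.uniform(0.0, 90.0, n)   # source-to-site azimuth, degrees

# Synthetic pulse occurrences: more likely close to the fault, with a long
# rupture propagating towards the site and a small azimuth.
logit = 1.5 - 0.12 * r + 0.08 * s - 0.03 * theta
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

model = LogisticRegression(max_iter=1000).fit(np.column_stack([r, s, theta]), y)

# Predicted pulse probability for a near-fault site 5 km from the rupture
site = np.array([[5.0, 25.0, 10.0]])
print(f"P(pulse) ~ {model.predict_proba(site)[0, 1]:.2f}")
```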
88

Seismic performance of precast concrete cladding systems.

Baird, Andrew January 2014 (has links)
Structural engineering is facing an extraordinarily challenging era. These challenges are driven by the increasing expectations of modern society to provide low-cost, architecturally appealing structures which can withstand large earthquakes. However, being able to avoid collapse in a large earthquake is no longer enough. A building must now be able to withstand a major seismic event with negligible damage so that it is immediately occupiable following such an event. As recent earthquakes have shown, the economic consequences of not achieving this level of performance are not acceptable. Technological solutions for low-damage structural systems are emerging. However, the goal of developing a low-damage building requires improving the performance of both the structural skeleton and the non-structural components. These non-structural components include items such as the claddings, partitions, ceilings and contents. Previous research has shown that damage to such items contributes a disproportionate amount to the overall economic losses in an earthquake. One such non-structural element that has a history of poor performance is the external cladding system, and this forms the focus of this research. Cladding systems are invariably complicated and provide a number of architectural functions. Therefore, it is important that these functions are not neglected when seeking to improve their seismic performance. The seismic vulnerability of cladding systems is determined in this research through a desktop background study, literature review, and post-earthquake reconnaissance survey of their performance in the 2010-2011 Canterbury earthquake sequence. This study identified that precast concrete claddings present a significant life-safety risk to pedestrians, and that the effect they have upon the primary structure is not well understood. The main objective of this research is consequently to better understand the performance of precast concrete cladding systems in earthquakes. This is achieved through an experimental campaign and numerical modelling of a range of precast concrete cladding systems. The experimental campaign consists of uni-directional, quasi-static cyclic earthquake simulation on a test frame which represents a single-storey, single-bay portion of a reinforced concrete building. The test frame is clad with various precast concrete cladding panel configurations. A major focus is placed upon the influence the connection between the cladding panel and structural frame has upon seismic performance. A combination of experimental component testing, finite element modelling and analytical derivation is used to develop cladding models for the systems investigated. The cyclic responses of the models are compared with the experimental data to evaluate their accuracy and validity. The comparison shows that the cladding models developed provide an excellent representation of real-world cladding behaviour. The cladding models are subsequently applied to a ten-storey case-study building. The expected seismic performance is examined with and without the cladding taken into consideration. The numerical analyses of the case-study building include modal analyses, non-linear adaptive pushover analyses, and non-linear dynamic seismic response (time-history) analyses to different levels of seismic hazard. The clad frame models are compared to the bare frame model to investigate the effect the cladding has upon the structural behaviour.
Both the structural performance and cladding performance are also assessed using qualitative damage states. The results show that poor performance of precast concrete cladding systems is expected when traditional connection typologies are used. This result confirms the misalignment of structural and cladding damage observed in recent earthquake events. Consequently, this research explores the potential of an innovative cladding connection. The outcomes from this research show that the innovative cladding connection proposed here is able to achieve low-damage performance whilst also being cost-comparable to a traditional cladding connection. It is also theoretically possible that the connection can contribute positively to the seismic performance of the structure by adding additional strength, stiffness and damping. Finally, the losses associated with both the traditional and innovative cladding systems are compared in terms of tangible outcomes, namely: repair costs, repair time and casualties. The results confirm that the use of innovative cladding technology can substantially reduce the overall losses that result from cladding damage.
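
As a hedged illustration of the kind of clad-versus-bare-frame time-history comparison described above, the sketch below integrates a linear single-degree-of-freedom "bare frame" and the same frame with the cladding contributing extra stiffness and damping, using Newmark average-acceleration integration. All structural properties and the sinusoidal ground motion are assumptions for illustration; the thesis's case-study analyses use detailed non-linear multi-storey models.

```python
# Minimal linear-elastic SDOF time-history sketch: bare frame vs. a frame with
# cladding adding stiffness and damping. All values are illustrative.
import numpy as np

def newmark_sdof(m, k, zeta, ag, dt, gamma=0.5, beta=0.25):
    """Displacement response of a linear SDOF to ground acceleration ag."""
    c = 2.0 * zeta * np.sqrt(k * m)
    n = len(ag)
    u = np.zeros(n); v = np.zeros(n); a = np.zeros(n)
    p = -m * ag                              # effective earthquake force
    a[0] = (p[0] - c * v[0] - k * u[0]) / m
    a0, a1 = 1.0 / (beta * dt**2), gamma / (beta * dt)
    a2, a3 = 1.0 / (beta * dt), 1.0 / (2.0 * beta) - 1.0
    a4, a5 = gamma / beta - 1.0, dt * (gamma / (2.0 * beta) - 1.0)
    k_eff = k + a0 * m + a1 * c
    for i in range(n - 1):
        p_eff = (p[i + 1] + m * (a0 * u[i] + a2 * v[i] + a3 * a[i])
                          + c * (a1 * u[i] + a4 * v[i] + a5 * a[i]))
        u[i + 1] = p_eff / k_eff
        a[i + 1] = a0 * (u[i + 1] - u[i]) - a2 * v[i] - a3 * a[i]
        v[i + 1] = v[i] + dt * ((1 - gamma) * a[i] + gamma * a[i + 1])
    return u

dt = 0.01
t = np.arange(0, 20, dt)
ag = 2.0 * np.sin(2 * np.pi * 1.0 * t)        # 1 Hz sinusoidal ground motion, m/s^2
m = 200e3                                      # tributary mass, kg
u_bare = newmark_sdof(m, k=2.0e7, zeta=0.05, ag=ag, dt=dt)
u_clad = newmark_sdof(m, k=2.6e7, zeta=0.08, ag=ag, dt=dt)  # cladding adds k and damping
print(f"peak drift bare: {abs(u_bare).max()*1000:.1f} mm, clad: {abs(u_clad).max()*1000:.1f} mm")
```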
89

Probabilistic Control: Implications For The Development Of Upper Limb Neuroprosthetics

Anderson, Chad January 2007 (has links)
Functional electrical stimulation (FES) involves artificial activation of paralyzed muscles via implanted electrodes. FES has been successfully used to improve the ability of tetraplegics to perform upper limb movements important for daily activities. The variety of movements that can be generated by FES is, however, limited to a few, such as hand grasp and release. Ideally, a user of an FES system would have effortless command over all of the degrees of freedom associated with upper limb movement. One reason that a broader range of movements has not been implemented is because of the substantial challenge associated with identifying the patterns of muscle stimulation needed to elicit additional movements. The first part of this dissertation addresses this challenge by using a probabilistic algorithm to estimate the patterns of muscle activity associated with a wide range of upper limb movements.
A neuroprosthetic involves the control of an external device via brain activity. Neuroprosthetics have been successfully used to improve the ability of tetraplegics to perform tasks important for interfacing with the world around them. The variety of mechanisms which they can control is, however, limited to a few devices such as special computer typing programs. Because motor areas of the cerebral cortex are known to represent and regulate voluntary arm movements, it might be possible to sense this activity with electrodes and decipher this information in terms of a moment-by-moment representation of arm trajectory. Indeed, several methods for decoding neural activity have been described, but these approaches are encumbered by technical difficulties. The second part of this dissertation addresses this challenge by using similar probabilistic methods to extract arm trajectory information from electroencephalography (EEG) electrodes that are already chronically deployed and widely used in human subjects.
Ultimately, the two approaches developed as part of this dissertation might serve as a flexible controller for interfacing brain activity with functional electrical stimulation systems to realize a brain-controlled upper-limb neuroprosthetic system capable of eliciting natural movements. Such a system would effectively bypass the injured region of the spinal cord and reanimate the arm, greatly increasing movement capability and independence in paralyzed individuals.
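
One widely used probabilistic decoding approach in the spirit of the methods described above is a linear-Gaussian (Kalman filter) decoder, sketched below: arm kinematics are treated as a latent state and EEG features as noisy linear observations of it. The matrices, synthetic trajectory and noise levels are illustrative assumptions, not the dissertation's actual model.

```python
# Hedged Kalman-filter decoding sketch: latent arm kinematics observed through
# noisy linear "EEG features". All matrices and signals are assumed.
import numpy as np

rng = np.random.default_rng(3)
T, n_state, n_obs = 200, 2, 8           # time steps, [x, y] hand position, EEG features

A = np.eye(n_state)                      # state transition (near-constant position)
Q = 1e-3 * np.eye(n_state)               # process noise
H = rng.normal(size=(n_obs, n_state))    # assumed linear EEG encoding of kinematics
R = 0.5 * np.eye(n_obs)                  # observation noise

# Simulate a circular reaching trajectory and its noisy EEG features
t = np.linspace(0, 2 * np.pi, T)
x_true = np.column_stack([np.cos(t), np.sin(t)])
z = x_true @ H.T + rng.multivariate_normal(np.zeros(n_obs), R, size=T)

# Kalman filter: predict with the state model, correct with each EEG sample
x_hat, P = np.zeros(n_state), np.eye(n_state)
decoded = np.zeros_like(x_true)
for k in range(T):
    x_pred = A @ x_hat
    P_pred = A @ P @ A.T + Q
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)           # Kalman gain
    x_hat = x_pred + K @ (z[k] - H @ x_pred)
    P = (np.eye(n_state) - K @ H) @ P_pred
    decoded[k] = x_hat

rmse = np.sqrt(np.mean((decoded - x_true) ** 2))
print(f"trajectory RMSE: {rmse:.3f}")
```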
90

Probabilistic modeling of microgrinding wheel topography

Kunz, Jacob Andrew 20 September 2013 (has links)
This work addresses the advanced probabilistic modeling of the stochastic nature of microgrinding in the machining of high-aspect-ratio ceramic micro-features. The heightened sensitivity of such high-fidelity workpieces to excessive grit cutting force drives a need for improved stochastic modeling. Statistical propagation is used to generate a comprehensive analytic probabilistic model for static wheel topography. Numerical simulation and measurement of microgrinding wheels show the model accurately predicts the stochastic nature of the topography when exact wheel specifications are known. Investigation into the statistical scale effects associated with microgrinding wheels shows that the decreasing number of abrasives in the wheel increases the relative statistical variability in the wheel topography, although variability in the wheel concentration number is the dominant source of variance. An in situ microgrinding wheel measurement technique is developed to aid calibration of the process model and reduce the inaccuracy caused by wheel specification error. A probabilistic model is generated for the dynamic wheel topography in straight-traverse and infeed microgrinding. Infeed microgrinding is shown to provide a method of measuring individual grit cutting forces with constant undeformed chip thickness within the grind zone. Measurements of the dynamic wheel topography in infeed microgrinding verified the accuracy of the probabilistic model.
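
As a hedged illustration of the statistical character of wheel topography described above, the sketch below uses Monte Carlo sampling to estimate the mean and variability of the number of active (engaged) grits across many simulated wheels. The wheel dimensions, grit-size distribution and protrusion model are assumptions for illustration, not the analytic model developed in the thesis.

```python
# Illustrative Monte Carlo sketch of static-topography statistics: grits of
# random size are placed randomly in the abrasive layer, and the number
# protruding into a given engagement depth is tallied across many wheels.
import numpy as np

rng = np.random.default_rng(5)

def active_grit_count(n_grits, layer_thickness, engagement_depth,
                      mean_diam=15e-6, sd_diam=3e-6):
    """Count grits whose top surface lies within the engagement depth."""
    diam = rng.normal(mean_diam, sd_diam, n_grits)
    centre_depth = rng.uniform(0.0, layer_thickness, n_grits)   # depth of grit centre
    grit_top_depth = centre_depth - diam / 2.0                  # depth of grit top
    return np.count_nonzero(grit_top_depth < engagement_depth)

n_wheels = 5000
counts = np.array([active_grit_count(n_grits=800,
                                     layer_thickness=100e-6,
                                     engagement_depth=5e-6)
                   for _ in range(n_wheels)])

print(f"active grits: mean = {counts.mean():.1f}, "
      f"std = {counts.std():.1f}, CoV = {counts.std()/counts.mean():.2f}")
```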
