891 |
An examination of the reliability and validity of a social responsibility goal scale for children / Nakaya, Motoyuki 26 December 1997 (has links)
Uses content digitized by the National Institute of Informatics.
|
892 |
Adaptive Reliability Analysis of Excavation Problems / Park, Jun Kyung 2011 August 1900 (has links)
Excavation activities like open cutting and tunneling work may cause ground movements. Many of these activities are performed in urban areas where many structures and facilities already exist. These activities are close enough to affect adjacent structures. It is therefore important to understand how the ground movements due to excavations influence nearby structures.
The goal of the proposed research is to investigate and develop analytical methods for addressing uncertainty during observation-based, adaptive design of deep excavation and tunneling projects. Computational procedures based on a Bayesian probabilistic framework are developed for comparative analysis between observed and predicted soil and structure response during construction phases. This analysis couples the adaptive design capabilities of the observational method with updated reliability indices, to be used in risk-based design decisions.
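To make the Bayesian updating step concrete, here is a minimal sketch (not from the thesis) of a conjugate normal-normal update of an uncertain soil parameter from construction-stage observations; the prior, observation noise, and measurement values are all hypothetical. The updated posterior would then feed into recomputed reliability indices for the next construction phase.

```python
# Bayesian updating of a soil parameter from construction-stage
# observations -- a minimal conjugate normal-normal sketch of the
# idea; prior, noise level, and "observations" are illustrative.
import numpy as np

mu0, sigma0 = 50.0, 10.0      # prior on soil stiffness (MPa), assumed
sigma_obs = 5.0               # measurement noise (MPa), assumed
observed = np.array([42.0, 44.5, 41.0])  # values back-figured from wall movement

# Conjugate update with known observation noise:
# posterior precision = prior precision + n / sigma_obs^2
n = len(observed)
post_var = 1.0 / (1.0 / sigma0**2 + n / sigma_obs**2)
post_mu = post_var * (mu0 / sigma0**2 + observed.sum() / sigma_obs**2)
print(f"posterior stiffness: {post_mu:.1f} +/- {post_var**0.5:.1f} MPa")
```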
A probabilistic framework is developed to predict three-dimensional deformation profiles due to supported excavations using a semi-empirical approach. The key advantage of this approach for practicing engineers is that an already common semi-empirical chart can be used together with a few additional simple calculations to better evaluate three-dimensional displacement profiles. A reliability analysis framework is also developed to assess the fragility of excavation-induced infrastructure system damage for multiple serviceability limit states.
Finally, a reliability analysis of a shallow circular tunnel driven by a pressurized shield in a frictional and cohesive soil is developed to consider the inherent uncertainty in the input parameters and the proposed model. The ultimate limit state for face stability is considered in the analysis, and the probability that the pressure required for face stability exceeds the specified applied pressure at the tunnel face is estimated. Sensitivity and importance measures are computed to identify the key parameters and random variables in the model.
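A minimal Monte Carlo sketch of the kind of failure-probability estimate described above. The limit state g = p_applied − p_required and the p_required(...) form below are hypothetical placeholders, not the thesis's mechanical model; only the overall pattern (sample uncertain inputs, evaluate the limit state, count failures) is the point.

```python
# Monte Carlo estimate of tunnel-face failure probability.
# A minimal sketch, NOT the thesis model: the limit state
# g = p_applied - p_required is illustrative, and the
# p_required(...) form below is a hypothetical placeholder.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Uncertain soil parameters (hypothetical distributions).
cohesion = rng.lognormal(mean=np.log(10.0), sigma=0.3, size=n)   # kPa
phi_deg  = rng.normal(loc=25.0, scale=3.0, size=n)               # degrees

def p_required(c, phi_deg, gamma=18.0, diameter=6.0):
    """Hypothetical required face pressure (kPa): grows with
    overburden, shrinks with cohesion and friction angle."""
    phi = np.radians(phi_deg)
    n_gamma = 1.0 / (4.0 * np.tan(phi))   # placeholder coefficient
    return gamma * diameter * n_gamma - 2.0 * c

p_applied = 60.0                                   # kPa, design choice
g = p_applied - p_required(cohesion, phi_deg)      # g < 0 => failure
pf = np.mean(g < 0.0)
print(f"estimated P(failure) = {pf:.4f}")
```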
|
893 |
Unified Reliability Index Development for Utility Quality Assessment / Sindi, Hatem 04 January 2013 (has links)
With the great potential smart distribution systems have to cause a paradigm shift in conventional distribution systems, many areas need investigation. Over the past few decades, many distribution system reliability indices have been developed. Varying in their calculation techniques, burden, and purpose, these indices cover a wide range of reliability issues facing both utilities and regulators. The main purpose of the continuous development of reliability indices is to capture a comprehensive picture of system performance. As systems evolve into smarter and more robust ones, the assessment tools need to improve as well. The lack of consensus among utilities and regulators on which indices should be used complicates the problem further. Furthermore, regulators still fall short on standardization, because no final standard has been developed; instead, they tend to advise or impose certain numbers on utilities based on historic performance. Because regulators inevitably make comparisons during the routine process of utilities reporting some of their indices, an adequate and fair process needs to be implemented. Variation in how utilities view the advised or imposed indices adds a further burden to achieving fair and adequate designs, upgrade requirements, and public goodwill: some utilities treat regulators' recommendations as guidelines, others as strict standards, and yet others as goals. In this work, a unified reliability index is developed that yields proper performance assessment, fair comparisons, and a reflection of all the knowledge embedded within current indices. The unified index provides several benefits, among them adequate standards design, improved tools for planning and design optimization, and less technical burden on operators. In addition, the development of the unified reliability index required the development of a standard normalization methodology.
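For context, the sketch below computes two of the standard indices the abstract alludes to (SAIFI and SAIDI, per the usual IEEE 1366 definitions) and combines them into a toy composite; the min-max normalization against benchmark ranges and the equal weighting are illustrative assumptions, not the unified index developed in the thesis.

```python
# Standard distribution reliability indices plus a toy composite.
# SAIFI/SAIDI follow the usual IEEE 1366 definitions; the
# normalization and weighting below are illustrative assumptions.
interruptions = [  # (customers_interrupted, outage_minutes)
    (1200, 90), (300, 45), (5000, 120), (800, 30),
]
customers_served = 10_000

saifi = sum(ci for ci, _ in interruptions) / customers_served
saidi = sum(ci * mins for ci, mins in interruptions) / customers_served
caidi = saidi / saifi  # avg restoration time per interrupted customer

# Toy composite: normalize each index against a benchmark range,
# then average -- one (of many possible) unification schemes.
benchmarks = {"saifi": (0.5, 3.0), "saidi": (30.0, 300.0)}
def normalize(value, lo, hi):
    return min(max((value - lo) / (hi - lo), 0.0), 1.0)

composite = 0.5 * normalize(saifi, *benchmarks["saifi"]) \
          + 0.5 * normalize(saidi, *benchmarks["saidi"])
print(f"SAIFI={saifi:.3f}, SAIDI={saidi:.1f} min, "
      f"CAIDI={caidi:.1f} min, composite={composite:.3f}")
```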
|
894 |
Modelling inground decay of wood poles for optimal maintenance decisions / Rahmin, Anisur January 2003 (has links)
Wood poles are popular and widely used in power supply industries all over the world because of their high strength per unit weight, low installation and maintenance costs, and excellent durability. The reliability of these components depends on a complex combination of age, usage, component durability, inspection, maintenance actions, and environmental factors influencing decay and failure. Breakdown or failure of any one or more of these components can lead to outages and cause huge losses to an organisation. It is therefore extremely important to predict the next failure so that it can be prevented, or its effects reduced, by appropriate maintenance and contingency plans. In Australia, more than 5.3 million wooden poles are in use. This represents an investment of around AU$12 billion, with a replacement cost varying between AU$1,500 and AU$2,500 per pole. Well-planned inspection and maintenance strategies that consider the effects of environmental and human factors can extend the reliability and safety of these components. Maintenance and sophisticated inspection are worthwhile if the additional costs are less than the savings from the reduced cost of failures. The objectives of this research are to:
* investigate decay patterns of timber components based on age and environmental factors (e.g. clay composition) for power supply wood poles in the Queensland region; and
* develop models for optimising inspection schedules and maintenance plans.
Deterioration of wood poles in Queensland is found to be caused mostly by inground soil conditions. Moisture content, pH value (acidity/alkalinity), bulk density, salinity, and electrical conductivity are found to influence the deterioration process. The presence of kaolin or quartz has some indirect effect on degradation: kaolin allows more water to be trapped inside the soil, which causes algae, moss, and mould to grow and attack the poles, whereas soils with high quartz content, by virtue of their permeability, allow more water to infiltrate, preventing the growth of micro-organisms. This research has increased fundamental understanding of the inground wood decay process, developed testing methods for soil factors, and proposed integrated models for performance improvement through optimal inspection, repair, and replacement strategies that consider durability, environmental, and human factors in maintenance decisions. A computer program is also developed to analyse "what if" scenarios for managerial decisions. This research has enhanced knowledge of the wood decay process in diverse environmental conditions. The outcomes are important not only to users of timber components subject to inground decay but also to the wood industry in general (the housing sector, railways for wooden sleepers, and other structural applications such as timber bridges). Three refereed conference papers have already come out of this research, and two more papers for refereed journal publication are in progress. This research can be extended to develop models for:
* qualitative as well as quantitative research databases on lab/field wood decay processes;
* assessment of the residual life of timber infrastructure;
* optimal condition monitoring and maintenance plans for timber components showing inground decay; and
* cost-effective decisions for the protection of timber components and the mitigation of decay.
Findings of this research can be applied to other equipment or assets with a time-dependent failure rate, and can be extended further to consider age/usage replacement policies, downtime, and liability costs.
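As a toy illustration of the inspection-optimization idea, the sketch below balances inspection cost against failure cost over candidate inspection intervals, using an assumed Weibull lifetime model in place of the thesis's soil-driven decay model; all costs and parameters are hypothetical.

```python
# Choosing an inspection interval by expected cost per year.
# A minimal sketch with hypothetical numbers: a Weibull lifetime
# stands in for the thesis's soil-driven decay model.
import math

beta, eta = 2.5, 40.0                 # Weibull shape/scale (years), assumed
c_inspect, c_failure = 50.0, 2000.0   # AU$ per event, assumed

def failure_prob(t):
    """P(pole fails within t years), Weibull CDF."""
    return 1.0 - math.exp(-((t / eta) ** beta))

def cost_rate(interval):
    """Expected cost per year if inspected (and renewed) every
    `interval` years; failures between inspections are penalized."""
    return (c_inspect + c_failure * failure_prob(interval)) / interval

best = min(range(1, 31), key=cost_rate)
for t in (1, 5, best, 20, 30):
    print(f"interval {t:2d} yr -> {cost_rate(t):7.1f} $/yr")
print(f"cheapest interval (of those scanned): {best} years")
```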
|
895 |
Data mining for degradation modelling / Lin, Hungyen. Unknown Date (has links)
Accelerated degradation testing is widely accepted in competitive industries. Because products no longer need to be tested until failure, there are tremendous cost and time benefits in fully capitalizing on such a testing regime. Consequently, this research aims at a better understanding of the relationship between design and degradation using degradation data. Existing work in the literature uses degradation data to improve the reliability of products; the majority of techniques, however, are centred on statistical experimental methods. For problems of increasing complexity, such as large multivariable data sets, non-linear interactions, and dynamically varying processes, conventional methods cannot resolve the problem efficiently. Furthermore, they do not provide an adequate modelling mechanism for learning autonomously from the degradation data to describe the relationship between design parameters and degradation. Artificial neural networks are widely used for complex problems in the literature. This thesis proposes and demonstrates a neural network modelling methodology for capturing the non-parametric relationship between design and degradation. / The development of a neural network consists of data preparation, network design, and training and testing. This thesis presents a comprehensive description of the data generation and acquisition process; more specifically, the physical tests, experimental designs, equipment configurations, data acquisition systems, and algorithms are elaborated. Single-hidden-layer multilayer perceptrons are found to be the most suitable network architecture for the problem domains. Detailed descriptions of the training and testing process for determining the number of hidden neurons sufficient for the problem are provided. / In summary, the neural network modelling methodology is demonstrated for the particular problem domain. As a result of the work in this thesis, two models of different practical significance are developed and compiled as Windows executables for predicting material performance. / Thesis (MEng (Manufacturing Engineering))--University of South Australia, 2006.
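A minimal sketch of the modelling approach described: a single-hidden-layer multilayer perceptron fitted to map design parameters to a degradation measure. The synthetic data, network size, and scikit-learn implementation are illustrative assumptions, not the thesis's experiments.

```python
# A single-hidden-layer MLP mapping design parameters to a
# degradation measure -- a minimal sketch with synthetic data,
# not the thesis's actual experiments or architecture sizes.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(500, 3))      # 3 hypothetical design params
# Hypothetical non-linear degradation response with noise.
y = 0.5 * X[:, 0] ** 2 + np.sin(3 * X[:, 1]) * X[:, 2] \
    + rng.normal(0, 0.05, size=500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(10,),  # one hidden layer
                     max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)
print(f"test R^2 = {model.score(X_te, y_te):.3f}")
```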
|
896 |
Test and evaluation master plan for "ACTE LIS" / Choudhary, Surendra Singh. Unknown Date (has links)
Thesis (MEng in Test and Evaluation)--University of South Australia, 1995
|
897 |
Architectural support for security and reliability in embedded processors / Ragel, Roshan Gabriel, Computer Science & Engineering, Faculty of Engineering, UNSW January 2006 (has links)
Security and reliability in processor-based systems are concerns requiring adroit solutions. Security is often compromised by code injection attacks, jeopardizing even "trusted software". Reliability is a concern where unintended code is executed: modern processors, with ever smaller feature sizes and low voltage swings, are prone to bit flips. Countermeasures using software-only approaches increase code size and therefore significantly reduce performance. Hardware-assisted approaches use additional hardware monitors and thus incur considerably higher hardware cost and have scalability problems. Considering reliability and security issues during the design of an embedded system has its advantages, as this overcomes the limitations of existing solutions. The research presented in this thesis combines two elements: one, defining a hardware/software design framework for reliability and security monitoring at the granularity of micro-instructions; and two, applying this framework to real-world problems. At any given time, a processor executes only a few instructions, and a large part of the processor is idle. Utilizing these idling hardware components, by sharing them with the monitoring hardware to perform security and reliability monitoring, reduces the impact of the monitors on hardware cost. Using micro-instruction routines within the machine instructions allows us to share most of the monitoring hardware; therefore, our technique requires little hardware overhead in comparison to adding hardware blocks outside the processor. This reduction in overhead is due to maximal sharing of the processor's hardware resources. Our framework is superior to software-only techniques because the monitoring routines are formed from micro-instructions that execute in parallel with machine instructions, reducing code size and execution time overheads. This dissertation makes four significant contributions to research on security and reliability in embedded processors: (i) it proposes a security and reliability framework for embedded processors that can be included in the design phase; (ii) it shows that inline (machine-instruction-level) monitoring detects common security attacks (four inline monitors against common attacks cost 9.21% area and 0.67% performance, as opposed to previous work where an external monitor with two monitoring modules costs 15% area overhead); (iii) it illustrates that basic-block check-summing for code integrity is much simpler and more efficient than previously proposed integrity violation detectors addressing code injection attacks (costing a 5.03% area increase and a 3.67% performance penalty with single-level control flow checking, as opposed to previous work with 5.59% area overhead that needed three levels of control flow integrity checking); and (iv) it shows that hardware-assisted control flow checking implemented during the design of a processor is much cheaper and more effective than software-only approaches (costing 0.24-1.47% performance and 3.59% area overhead, as opposed to previous work costing 53.5-99.5% performance).
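The basic-block check-summing idea from contribution (iii) can be sketched in software, with the caveat that the thesis implements it in hardware via micro-instructions; the "opcodes" and block names below are hypothetical.

```python
# Basic-block check-summing for code integrity -- a minimal,
# software-level sketch of the idea (the thesis does this in
# hardware via micro-instructions; everything here is illustrative).
import zlib

# Pretend "program": basic blocks as byte strings of instructions.
blocks = {
    "entry":  b"\x55\x48\x89\xe5",        # hypothetical opcodes
    "loop":   b"\x48\xff\xc0\x75\xfb",
    "return": b"\x5d\xc3",
}

# Load time: record a reference checksum per block.
reference = {name: zlib.crc32(code) for name, code in blocks.items()}

def execute_block(name, code):
    """Runtime check before 'executing' a block: recompute the
    checksum and compare against the load-time reference."""
    if zlib.crc32(code) != reference[name]:
        raise RuntimeError(f"integrity violation in block {name!r}")
    # ... dispatch to the real instructions here ...

execute_block("loop", blocks["loop"])          # passes
tampered = blocks["loop"][:-1] + b"\x90"       # injected byte
try:
    execute_block("loop", tampered)
except RuntimeError as e:
    print(e)                                    # detected
```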
|
898 |
Data reliability control in wireless sensor networks for data streaming applications / Le, Dinh Tuan, Computer Science & Engineering, Faculty of Engineering, UNSW January 2009 (has links)
This thesis contributes to the design of a reliable and energy-efficient transport system for Wireless Sensor Networks. Wireless Sensor Networks have emerged as a vital new area in networking research. In many Wireless Sensor Network systems, a common task of sensor nodes is to sense the environment and send the sensed data to a sink node; the effectiveness of a Wireless Sensor Network thus depends on how reliably the sensor nodes can deliver their sensed data to the sink. However, the sensor nodes are susceptible to loss for various reasons: dynamics in the wireless transmission medium, environmental interference, battery depletion, accidental damage, and so on. Assuring reliable data delivery between the sensor nodes and the sink in Wireless Sensor Networks is therefore a challenging task. The primary contributions of this thesis comprise four parts. First, we design, implement, and evaluate a cross-layer communication protocol for reliable data transfer for data streaming applications in Wireless Sensor Networks. We employ reliable algorithms in each layer of the communication stack. At the MAC layer, a CSMA MAC protocol with explicit hop-by-hop Acknowledgment loss recovery is employed; to ensure end-to-end reliability, the maximum number of retransmissions is estimated and used at each sensor node. At the transport layer, an end-to-end Negative Acknowledgment with an aggregated positive Acknowledgment mechanism is used: by inspecting the sequence numbers on the packets, the sink can detect which packets were lost. In addition, to increase the robustness of the system, a watchdog process is implemented at both the base station and the sensor nodes, enabling them to power-cycle when an unexpected fault occurs. We present extensive evaluations, including theoretical analysis, simulations, and experiments in the field based on the Fleck-3 platform and the TinyOS operating system. The designed network system has been working in the field for over a year, and the results show that our system is a promising solution for a sustainable irrigation system. Second, we present the design of a policy-based Sensor Reliability Management framework for Wireless Sensor Networks called SRM. SRM is based on a hierarchical management architecture and on the policy-based network management paradigm. SRM allows network administrators to interact with the Wireless Sensor Network via management policies, and it also provides a self-control capability to the network. This thesis restricts SRM to reliability management, but the same framework is applicable to other management services given suitable management policies. Our experimental results show that SRM can offer sufficient reliability to application users while reducing energy consumption by more than 50% compared to other approaches. Third, we propose an Energy-efficient and Reliable Transport Protocol called ERTP, designed for data streaming applications in Wireless Sensor Networks. ERTP is an adaptive transport protocol based on statistical reliability: it ensures that the number of data packets delivered to the sink exceeds a defined threshold while reducing energy consumption. Using a statistical reliability metric when designing a reliable transport protocol guarantees the delivery of adequate information to the users and reduces energy consumption compared to absolute reliability. ERTP uses hop-by-hop Implicit Acknowledgment with a dynamically updated retransmission timeout for packet loss recovery.
In multihop wireless networks, the transmitter can overhear a forwarding transmission and interpret it as an Implicit Acknowledgment. By combining statistical reliability with hop-by-hop Implicit Acknowledgment loss recovery, ERTP can offer sufficient reliability to application users with minimal energy expense. Our extensive simulations and experimental evaluations show that ERTP can reduce energy consumption by more than 45% compared to a state-of-the-art protocol; consequently, sensor nodes are more energy-efficient and the lifespan of the unattended Wireless Sensor Network is increased. In Wireless Sensor Networks, sensor node failures can create network partitions or coverage loss that cannot be solved by providing reliability at higher layers of the protocol stack. In the final part of this thesis, we investigate the problem of maintaining network connectivity and coverage when sensor nodes fail. We consider a hybrid Wireless Sensor Network in which a subset of the nodes can move, at a high energy expense. When a node has low remaining energy (a dying node) but is critical to the network, such as a cluster head, it seeks a replacement. If a redundant node is located within the transmission range of the dying node and can fulfil the network connectivity and coverage requirements, it can be used as a substitute; otherwise, a protocol should be in place to relocate a redundant sensor node for replacement. We propose a distributed protocol for the Mobile Sensor Relocation problem, called Moser. Moser works in three phases. In the first phase, the dying node determines whether a network partition would occur, finds an available mobile node, and asks for a replacement using a flooding algorithm; it also decides the movement schedule of the available mobile node based on certain criteria. The second phase involves the actual movement of the mobile node towards the location of the dying node. Finally, when the mobile node has reached the transmission range of the dying node, it communicates with the dying node and moves to a desired location where network connectivity and coverage for the neighbours of the dying node are preserved.
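To illustrate the statistical-reliability idea behind ERTP, the sketch below sizes a per-hop transmission budget so that end-to-end delivery meets a target, assuming independent losses per hop and per attempt; the loss rates, hop count, and target are illustrative, and this is not ERTP's actual algorithm.

```python
# Sizing per-hop transmissions for a statistical reliability
# target -- a minimal sketch of the idea behind ERTP-style
# hop-by-hop loss recovery; numbers and the independence
# assumptions are illustrative.
import math

def tx_budget(p_loss, hops, target):
    """Smallest r with (1 - p_loss**r)**hops >= target, assuming
    independent losses per attempt and per hop."""
    per_hop = target ** (1.0 / hops)        # required per-hop success
    r = math.log(1.0 - per_hop) / math.log(p_loss)
    return math.ceil(r)

for p in (0.1, 0.3, 0.5):
    r = tx_budget(p_loss=p, hops=6, target=0.8)
    achieved = (1 - p ** r) ** 6
    print(f"link loss {p:.0%}: {r} tx/hop -> "
          f"end-to-end delivery {achieved:.3f}")
```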
|
899 |
On the construction of reliable device drivers / Ryzhyk, Leonid, Computer Science & Engineering, Faculty of Engineering, UNSW January 2009 (has links)
This dissertation is dedicated to the problem of device driver reliability. Software defects in device drivers constitute the biggest source of failure in operating systems, causing significant damage through downtime and data loss. Previous research on driver reliability has concentrated on detecting and mitigating defects in existing drivers using static analysis or runtime isolation. In contrast, this dissertation presents an approach to reducing the number of defects through an improved device driver architecture and development process. In analysing factors that contribute to driver complexity and induce errors, I show that a large proportion of errors are due to two key shortcomings in the device-driver architecture enforced by current operating systems: poorly defined communication protocols between drivers and the operating system, which confuse developers and lead to protocol violations, and a multithreaded model of computation, which leads to numerous race conditions and deadlocks. To address the first shortcoming, I propose to describe driver protocols using a formal, state-machine-based language, which avoids confusion and ambiguity and helps driver writers implement correct behaviour. The second issue is addressed by abandoning multithreading in drivers in favour of a more disciplined event-driven model of computation, which eliminates most concurrency-related faults. These improvements reduce the number of defects without radically changing the way drivers are developed. In order to further reduce the impact of human error on driver reliability, I propose to automate the driver development process by synthesising the implementation of a driver from the combination of three formal specifications: a device-class specification that describes common properties of a class of similar devices, a device specification that describes a concrete representative of the class, and an operating system interface specification that describes the communication protocol between the driver and the operating system. This approach allows those with the most appropriate skills and knowledge to develop the specifications: device specifications are developed by device manufacturers, and operating system specifications by the operating system designers. The device-class specification is the only one that requires understanding of both hardware and software-related issues; however, writing such a specification is a one-off task that only needs to be completed once for a class of devices. This approach also facilitates the reuse of specifications: a single operating-system specification can be combined with many device specifications to synthesise drivers for multiple devices. Likewise, since device specifications are independent of any operating system, drivers for different systems can be synthesised from a single device specification. As a result, the likelihood of errors due to incorrect specifications is reduced because these specifications are shared by many drivers. I demonstrate that the proposed techniques can be incorporated into existing operating systems without sacrificing performance or functionality by presenting their implementation in Linux. This implementation allows drivers developed using these techniques to coexist with conventional Linux drivers, providing a gradual migration path to more reliable drivers.
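A minimal sketch of the state-machine idea: a driver/OS protocol written as an explicit transition table, with illegal events flagged rather than silently mishandled. The states and events are hypothetical, not the dissertation's specification language, which also covers synthesis rather than mere runtime checking.

```python
# Describing a driver/OS protocol as an explicit state machine --
# a minimal sketch of the idea; the states and events below are
# hypothetical, not the dissertation's specification language.
PROTOCOL = {  # state -> {event: next_state}
    "stopped":  {"start": "idle"},
    "idle":     {"submit_request": "busy", "stop": "stopped"},
    "busy":     {"complete": "idle", "cancel": "idle"},
}

class ProtocolError(Exception):
    pass

class DriverModel:
    """Tracks protocol state; any event not allowed in the current
    state is flagged instead of silently misbehaving."""
    def __init__(self):
        self.state = "stopped"

    def fire(self, event):
        allowed = PROTOCOL[self.state]
        if event not in allowed:
            raise ProtocolError(
                f"event {event!r} illegal in state {self.state!r}")
        self.state = allowed[event]

d = DriverModel()
for ev in ("start", "submit_request", "complete"):
    d.fire(ev)                       # legal trace
try:
    d.fire("complete")               # completing with nothing pending
except ProtocolError as e:
    print(e)
```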
|
900 |
Integrated Global Positioning System and inertial navigation system integrity monitor performance / Harris, William M. January 2003 (has links)
Thesis (M.S.)--Ohio University, August, 2003. / Title from PDF t.p. Includes bibliographical references (leaves 33-34).
|