1 |
Delamination in composite laminates with curvature and discontinuous plies / Petrossian, Zackarias, January 1998
No description available.
|
2 |
An examination of the stability of forecasting in failure prediction models / Lin, Lee-Hsuan, January 1992
No description available.
|
3 |
Failure criterion for masonry arch bridges / Wang, Xin Jun, January 1993
No description available.
|
4 |
Stopping Launch Pad Delays, Launch Failures, Satellite Infant Mortalities and On-Orbit Satellite Failures Using Telemetry Prognostic Technology / Losik, Len, October 2007
ITC/USA 2007 Conference Proceedings / The Forty-Third Annual International Telemetering Conference and Technical Exhibition / October 22-25, 2007 / Riviera Hotel & Convention Center, Las Vegas, Nevada / Telemetry prognostics is failure prediction that uses telemetry from launch vehicle and satellite space flight equipment to prevent launch failures, launch pad delays, satellite infant mortalities and on-orbit satellite failures. The technology characterizes telemetry behaviors that are latent and transient and that go undetected by even the most experienced engineering personnel and software diagnostic tools during integration and test, launch operations and on-orbit activities. Telemetry prognostics provides innovative techniques for determining the remaining useful life of critical on-board equipment, taking into account system states, attitude reorientations, equipment usage patterns, failure modes and piece-part failure characteristics, thereby increasing the reliability, usability, serviceability, availability and safety of the nation's space systems.
|
5 |
Failure Prediction and Stress Analysis of Microcutting Tools / Chittipolu, Sujeev, May 2009
Miniaturized devices are key to producing next-generation micro-electro-mechanical products. Their applications extend to many fields that demand tight tolerances and functional and structural integrity from microproducts and components. Silicon-based products are limited because silicon is brittle, so products must also be made from other engineering materials and machined at the microscale.
This research predicts microtool failure by studying the effects of spindle runout and tool deflection on the tool, and by measuring the cutting force that fails the tool during micro-end-milling. End-milling was performed on SS-316L using a two-flute tungsten carbide tool of 1.016 mm diameter.
Tool runout, measured with a laser, was found to be less than 1 µm, and tool deflection at 25,000 rpm was 20 µm. Finite element analysis (FEA) predicts tool failure by static bending for deflections greater than 99% of the tool diameter. Threshold values of chipload and cutting force that cause tool failure were found from the work done by the tool. Failure thresholds were suggested for axial depths of cut between 17.25% and 34.5% of the cutter length. For a chipload greater than 20% of the cutter diameter, the microtool fails instantly at any radial depth of cut.
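The failure limits reported above can be collected into a small sketch. The numeric limits (99% of diameter for bending, 20% of diameter for chipload, 17.25-34.5% of cutter length for axial depth) come from the abstract; the function shape, units and example cutter length are illustrative assumptions.

```python
# Hypothetical check against the thesis's reported failure thresholds.
# Only the percentage limits are from the abstract; everything else
# (function interface, example cutter length) is assumed.
def microtool_failure_check(tool_dia_mm, cutter_len_mm,
                            deflection_mm, chipload_mm, axial_doc_mm):
    """Return a list of predicted failure modes (empty if none)."""
    modes = []
    # FEA result: static bending failure when deflection > 99% of diameter
    if deflection_mm > 0.99 * tool_dia_mm:
        modes.append("static bending")
    # Instant failure when chipload exceeds 20% of cutter diameter
    if chipload_mm > 0.20 * tool_dia_mm:
        modes.append("chipload overload")
    # Thresholds were suggested only for axial depths of cut within
    # 17.25%-34.5% of cutter length; flag values outside that range
    if not (0.1725 * cutter_len_mm <= axial_doc_mm <= 0.345 * cutter_len_mm):
        modes.append("outside validated axial depth range")
    return modes

# Example: the 1.016 mm tool with its measured 20 um deflection,
# but an excessive chipload (cutter length of 6 mm is assumed)
print(microtool_failure_check(1.016, 6.0, 0.020, 0.25, 1.5))
```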
|
6 |
Prediction-based failure management for supercomputers / Ge, Wuxiang, January 2011
The growing requirements of a diverse range of applications necessitate large and powerful computing systems, and failures in these systems can cause severe damage, from loss of human life to economic harm. Current fault-tolerance techniques, however, cannot meet the increasing reliability requirements, so new solutions are needed; proactive schemes are one direction that may offer better efficiency. This thesis proposes a novel proactive failure management framework whose goal is to reduce failure penalties and improve fault-tolerance efficiency when supercomputers run complex applications. The proposed scheme builds on two core components: failure prediction and proactive failure recovery. The failure prediction component is based on the assessment of system events and employs semi-Markov models to capture the dependencies between failures and other events when forecasting forthcoming failures. A two-level failure prediction strategy not only estimates future failure occurrences but also identifies the specific failure categories. Building on accurate failure forecasting, a prediction-based coordinated checkpoint mechanism constructs extra checkpoints just before each predicted failure so that wasted computational time is significantly reduced. A theoretical model assesses the proactive scheme, enabling calculation of the overall wasted computational time. The prediction component has been applied to industrial data from the IBM BlueGene/L system.
Results show a large improvement in prediction accuracy over three other well-known prediction approaches: the semi-Markov based predictor achieved a precision of 87.41% and a recall of 77.95%, outperforming the other predictors.
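The benefit of prediction-based checkpointing can be illustrated with a toy model, which is not the thesis's actual theoretical model: with purely periodic checkpoints a failure loses, on average, a sizeable fraction of an interval, while an extra checkpoint taken just before a correctly predicted failure loses only the checkpoint cost plus the short lead-time window. All parameters below are assumed for illustration.

```python
# Toy comparison (assumed model, not the thesis's) of wasted work
# under periodic vs. prediction-based checkpointing.
def wasted_time_periodic(interval, failure_time, ckpt_cost):
    """Work lost when a failure strikes between periodic checkpoints:
    everything since the last completed checkpoint is recomputed."""
    since_last = failure_time % (interval + ckpt_cost)
    return min(since_last, interval)  # at most one full interval lost

def wasted_time_predicted(ckpt_cost, lead_time):
    """With a correct prediction, only the extra checkpoint and the
    short lead-time window before the failure are lost."""
    return ckpt_cost + lead_time

# Hourly checkpoints costing 60 s, failure at t = 10,000 s,
# versus a predictor giving 120 s of warning
periodic = wasted_time_periodic(interval=3600, failure_time=10000, ckpt_cost=60)
proactive = wasted_time_predicted(ckpt_cost=60, lead_time=120)
print(periodic, proactive)
```

The gap widens as checkpoint intervals grow, which is why even a predictor with imperfect recall can pay for itself.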
|
7 |
Towards Interpretable Vision Systems / Zhang, Peng, 06 December 2017
Artificial intelligence (AI) systems are booming today; they are used to solve new tasks and to improve performance on existing ones. However, most AI systems work in a black-box fashion, which prevents users from accessing their inner modules. This leads to two major problems: (i) users have no idea when the underlying system will fail, so it can fail abruptly without warning or explanation, and (ii) users' lack of insight into the system makes it hard to push AI progress beyond the current state of the art. In this work we address these problems in the following directions. First, we develop a failure prediction system that acts as an input filter: it raises a flag when the system is likely to fail on a given input. Second, we develop a portfolio computer vision system that predicts which of several candidate vision systems will perform best on the input. Both systems have the benefit of looking only at the inputs, without running the underlying vision systems, and both are applicable to any vision system. By equipping different applications with such systems, we confirm the improved performance. Finally, instead of identifying errors, we develop more interpretable AI systems that reveal their inner modules directly. We take two tasks as examples: word semantic matching and Visual Question Answering (VQA). In VQA, we start with binary questions on abstract scenes and then extend to all question types on real images. In both cases we treat attention as an important intermediate output; by explicitly forcing the systems to attend to the correct regions, we encourage correctness in the systems. For semantic matching, we build a neural network that learns the matching directly, instead of using relation similarity between words. Across all these directions, we show that by diagnosing errors and building more interpretable systems, we can improve the performance of current models. / Ph. D.
/ Researchers have made rapid progress in artificial intelligence (AI). For example, AI systems have reached new state-of-the-art performance on object detection in computer vision, and systems such as AlphaGo can play games by themselves, which had never happened before. However, most AI systems work in a black-box fashion that prevents users from accessing their inner modules. This causes two problems. On one hand, users do not know when the underlying system will fail: in object detection, they have no idea when the system will miss a cat in a cat image or recognize a dog as a cat. On the other hand, users cannot see how the system works, so it is hard for them to find the bottleneck and improve overall performance. In this work we tackle these problems in two broad directions: diagnosing errors and building interpretable systems. The first can be addressed in two ways, identifying erroneous inputs and identifying erroneous systems, so we build a failure prediction system and a portfolio computer vision system, respectively. The failure prediction system raises a warning when the input is not reliable, while the portfolio system picks the predicted best-performing approach from a set of candidates. Finally, we focus on developing more interpretable AI systems that reveal their inner modules directly, taking word semantic matching and Visual Question Answering (VQA) as examples. A VQA system produces an answer for a given image and question; we treat attention as an important intermediate output that mimics how humans solve the task. In semantic matching, we build a system that learns the matching between words directly, instead of using relation similarity between them. In both directions, we show improved performance in a variety of applications.
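The two input-only components described above can be sketched abstractly. The interfaces, the toy risk and score models, and the input features are all assumptions for illustration; they are not the thesis's actual learned models.

```python
# Minimal sketch (assumed interfaces, not the thesis code) of an
# input-only failure filter and portfolio selector: both decide from
# input features alone, without running any underlying vision system.
def failure_filter(input_features, risk_model, threshold=0.5):
    """Flag the input when the predicted failure probability is high."""
    return risk_model(input_features) >= threshold

def portfolio_select(input_features, score_models):
    """Pick the candidate system with the highest predicted score."""
    return max(score_models, key=lambda name: score_models[name](input_features))

# Toy stand-ins for learned models: blurry frames are risky, and a
# detector specialized for low light is predicted to do better on
# dark frames.
risk = lambda f: 0.9 if f["blur"] > 0.7 else 0.1
scores = {
    "daylight_detector": lambda f: 1.0 - f["darkness"],
    "lowlight_detector": lambda f: f["darkness"],
}
frame = {"blur": 0.2, "darkness": 0.8}
print(failure_filter(frame, risk), portfolio_select(frame, scores))
```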
|
8 |
AI/ML Development for RAN Applications: Deep Learning in Log Event Prediction / Sun, Yuxin, January 2023
Since many log-tracing applications and diagnostic commands are now available on base station nodes, event logs can easily be collected, parsed and structured for network performance analysis. To improve the In Service Performance of a customer network, a sequential machine learning model can be trained, tested and deployed on each node to learn from past events and predict future crashes or failures. This thesis evaluates and analyses the effectiveness of deep learning models in predicting log events. It explores the application of a stacked long short-term memory (LSTM) based model for capturing temporal dependencies and patterns in log event data. In addition, it investigates the probability distribution of the next event in the logs and estimates the event trigger time to predict future node restart events. The project aims to improve node availability time in Ericsson base stations and to contribute to further applications of deep learning in log event prediction. A framework with two main phases is used to analyse and predict the occurrence of restart events from the sequence of events. In the first phase, we perform natural language processing (NLP) on the log content to obtain log keys, and then identify the sequences that lead to restart events. In the second phase, we analyse the event sequences that resulted in restarts and predict how many minutes into the future a restart will occur. Experimental results show that the framework achieves no less than 73% accuracy on restart prediction and more than 1.5 minutes of lead time before a restart. The framework also performs well on non-restart events.
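The first phase, reducing raw log content to log keys, is commonly done by masking the variable fields of each line so that lines sharing a template collapse to one key. The sketch below shows the idea; the regexes and example log lines are illustrative assumptions, not the thesis's actual parser.

```python
# Hedged sketch of log-key extraction: mask variable fields
# (IP addresses, hex ids, numbers) so that log lines sharing a
# template map to the same key. Patterns are assumptions.
import re

def log_key(line):
    """Reduce a raw log line to its constant template (the log key)."""
    line = re.sub(r"\b\d{1,3}(?:\.\d{1,3}){3}\b", "<IP>", line)   # IPv4 addresses
    line = re.sub(r"\b0x[0-9a-fA-F]+\b", "<HEX>", line)           # hex identifiers
    line = re.sub(r"\b\d+\b", "<NUM>", line)                      # plain numbers
    return line.strip()

# Two hypothetical log lines that differ only in variable fields
lines = [
    "cell 12 restart requested by 10.0.0.7",
    "cell 31 restart requested by 10.0.0.9",
]
keys = {log_key(l) for l in lines}
print(keys)  # both lines collapse to a single template
```

The resulting key sequences, rather than raw text, are what a sequence model such as a stacked LSTM would consume.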
|
9 |
Predicting Failures and Estimating Duration of Remaining Service Life from Satellite Telemetry / Losik, Len; Wahl, Sheila; Owen, Lewis, October 1996
International Telemetering Conference Proceedings / October 28-31, 1996 / Town and Country Hotel and Convention Center, San Diego, California / This paper addresses research into predicting hardware failures and estimating remaining service life for satellite components using a Failure Prediction Process (FPP). It is a joint paper, presenting initial research completed at the University of California, Berkeley, Center for Extreme Ultraviolet (EUV) Astrophysics using telemetry from the EUV Explorer (EUVE) satellite, together with statistical computational analysis completed by Lockheed Martin. The Berkeley work identified suspect "failure precursors"; Lockheed Martin explored the application of statistical pattern-recognition methods to identify FPP events that the human expert had observed visually. Both the visual and the statistical methods were successful in detecting suspect failure precursors. An estimate of remaining service life for each unit was made from the time the suspect failure precursor was identified, and this was compared with the actual time the equipment remained operable. The long-term objective of this research is a resident software module that can report FPP events automatically, economically and with high reliability for long-term management of spacecraft, aircraft and ground equipment. Based on the detection of an FPP event, an estimate of the unit's remaining service life can be calculated and used as a basis for managing the failure.
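A statistical precursor detector of the general kind discussed above can be sketched simply: flag a telemetry sample when it deviates several standard deviations from its recent baseline. This is an assumed illustration of the concept, not the paper's actual pattern-recognition method.

```python
# Illustrative precursor flagging (assumed method, not the paper's):
# a sample is suspect when it lies more than n_sigma standard
# deviations from the mean of the preceding window of samples.
def precursor_indices(samples, window=5, n_sigma=3.0):
    """Return indices whose value deviates > n_sigma from the
    mean/stddev of the preceding `window` samples."""
    flagged = []
    for i in range(window, len(samples)):
        base = samples[i - window:i]
        mean = sum(base) / window
        var = sum((x - mean) ** 2 for x in base) / window
        std = var ** 0.5
        if std > 0 and abs(samples[i] - mean) > n_sigma * std:
            flagged.append(i)
    return flagged

# A stable (hypothetical) temperature channel with one sudden excursion
telemetry = [20.0, 20.1, 19.9, 20.0, 20.1, 20.0, 26.5, 20.1]
print(precursor_indices(telemetry))  # flags the excursion at index 6
```

In practice a flagged sample would only be a *suspect* precursor, to be confirmed by an expert or further analysis, as in the paper's visual/statistical comparison.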
|
10 |
A Case for Waste, Fraud and Abuse: Stopping the Air Force from Purchasing Spacecraft That Fail Prematurely / Losik, Len, October 2011
ITC/USA 2011 Conference Proceedings / The Forty-Seventh Annual International Telemetering Conference and Technical Exhibition / October 24-27, 2011 / Bally's Las Vegas, Las Vegas, Nevada / Spacecraft and launch vehicle reliability is dominated by premature and surprise equipment failures that increase risk and decrease safety, mission assurance and effectiveness. Large, complex aerospace systems such as aircraft, launch vehicles and satellites are subjected to the most exhaustive and comprehensive acceptance testing programs in any industry, yet they suffer some of the highest premature failure rates. Required spacecraft equipment performance is confirmed during factory testing using telemetry, but the equipment mission-life requirement is not measured; it is calculated manually, so equipment that will fail prematurely is not identified and replaced before use. Spacecraft equipment mission life is calculated with stochastic equations from probabilistic reliability engineering standards such as MIL STD 217 rather than measured and confirmed before launch, as performance is. The change in engineering practice needed to identify equipment that will fail prematurely is a prognostic and health management (PHM) program, which uses predictive algorithms to convert equipment telemetry into a measurement of remaining usable life. A PHM program makes the generation, collection, storage and engineering and scientific analysis of equipment performance data "mission critical" rather than just nice-to-have engineering information.
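Converting telemetry into a *measured* remaining-usable-life figure, rather than a stochastic calculation, can be sketched with the simplest possible degradation model: fit a line to a drifting parameter and extrapolate to its failure limit. The model, units and example data are assumptions for illustration, not the paper's predictive algorithms.

```python
# Hedged sketch (assumed linear degradation model, not the paper's
# algorithm): estimate remaining usable life by extrapolating a
# telemetry trend to its failure limit via a least-squares fit.
def remaining_life(times, values, failure_limit):
    """Fit values ~ slope*t + intercept; return time remaining until
    the fitted trend crosses failure_limit (None if not degrading)."""
    n = len(times)
    t_mean = sum(times) / n
    v_mean = sum(values) / n
    num = sum((t - t_mean) * (v - v_mean) for t, v in zip(times, values))
    den = sum((t - t_mean) ** 2 for t in times)
    slope = num / den
    if slope <= 0:  # parameter is not trending toward the limit
        return None
    intercept = v_mean - slope * t_mean
    t_fail = (failure_limit - intercept) / slope
    return t_fail - times[-1]

# Hypothetical battery temperature drifting 0.5 degrees/day toward
# a 30-degree limit
days = [0, 1, 2, 3, 4]
temps = [25.0, 25.5, 26.0, 26.5, 27.0]
print(remaining_life(days, temps, 30.0))  # -> 6.0 days of life left
```

Real PHM predictors handle noise, nonlinearity and usage-dependent degradation, but the core idea of measuring life from the telemetry trend is the same.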
|