161

Experiments and Capability Analysis in Process Industry / Experiment och duglighetsanalys i processindustrin

Lundkvist, Peder January 2012 (has links)
The existence of variation has been a major problem in industry since the industrial revolution. Hence, many organizations try to find strategies to master and reduce the variation. Statistical analysis, such as process capability analysis and Design of Experiments (DoE), often plays an important role in such a strategy. Process capability analysis can determine how the process performs relative to its requirements or specifications, where an important part is the use of process capability indices. DoE includes powerful methods, such as factorial designs, which help experimenters to maximize the information output from conducted experiments and minimize the experimental work required to reach statistically significant results. Continuous processes, frequently found in the process industry, raise special issues that are typically not addressed in the DoE literature, for example autocorrelation and dynamics. The overall purpose of this research is to contribute to an increased knowledge of analyzing DoE and capability in the process industry, which is achieved through simulations and case studies of real industrial processes. This research focuses on developing analysis procedures adapted for experiments and on comparing decision methods for capability analysis in the process industry. The results of this research are presented in three appended papers. Paper A shows how a two-level factorial experiment can be used to identify factors that affect the depth and variation of the oscillation mark that arises in the steel casting process. Four factors were studied: stroke length of the mold, oscillation frequency, motion pattern of the mold (sinus factor), and casting speed. The ANOVA analysis turned out to be problematic because of a non-orthogonal experimental design due to loss of experimental runs. Nevertheless, no earlier studies were found that show how the sinus factor is changed in combination with the oscillation frequency so that the interaction effect can be studied. Paper B develops a method to analyze factorial experiments affected by process interruptions and loss of experimental runs by using time series analysis. Paper C compares four different methods for capability analysis when data are autocorrelated, through simulations and a case study of a real industrial process. In summary, it is hard to recommend one single method that works well in all situations; however, two methods appeared to perform better than the others. Keywords: Process industry, Continuous processes, Autocorrelation, Design of Experiments, Process capability, Time series analysis.
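As a small illustration of the capability indices mentioned in the abstract, the sketch below computes Cpk for a sample in Python; the specification limits and data are invented, and the naive standard-deviation estimate it uses is exactly what becomes questionable when data are autocorrelated, as the thesis stresses.

```python
import numpy as np

def cpk(data, lsl, usl):
    """Estimate the Cpk process capability index from a sample.

    Cpk = min(USL - mean, mean - LSL) / (3 * sigma); values above ~1.33
    are commonly read as a capable process.
    """
    mu = np.mean(data)
    sigma = np.std(data, ddof=1)  # sample standard deviation (assumes independent data)
    return min(usl - mu, mu - lsl) / (3 * sigma)

# Illustrative measurements from a process with specification limits 9.0-11.0
rng = np.random.default_rng(1)
x = rng.normal(loc=10.1, scale=0.25, size=200)
print(f"Cpk = {cpk(x, lsl=9.0, usl=11.0):.2f}")
```

With autocorrelated observations the sigma estimate above tends to be biased, which is one reason the thesis compares several decision methods rather than relying on this textbook formula.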
162

Optimization of Rotor Operation for Heat Exchangers / Optimering av rotordrift för värmeväxlare

Holländer, Anton January 2023 (has links)
No description available.
163

Process Improvement in the Manufacturing of High Performance Ceramics / Processförbättring vid tillverkning av konstruktionskeramer

Garvare, Rickard January 1998 (has links)
This thesis is about implementing Design of Experiments in enterprises manufacturing high performance ceramics. The manufacturing of ceramics is a complex process which involves problems with variation in product properties and in process performance. Every system in operation generates information that can be used to improve it. To be able to improve, measurements must be made and recorded data must be transformed into information. Design of Experiments is about performing tests using a minimum of resources to receive a maximum of information about a process or a system. Today most of the development of processes and products is supported by expensive, and often misleading, one-factor-at-a-time experiments. To examine the possibilities of facilitating implementation of Design of Experiments, case studies of two Swedish manufacturers of high performance ceramics were carried out. A model for implementing Design of Experiments is presented based on theory and the case studies. The proposed model consists of three major phases: 1. Planning and education. 2. Pilot project with new ways of working. 3. Assessment, maintenance and improvement. Design of Experiments appears to be a well-suited technique for structuring the development of manufacturing high performance ceramics. The implementation of Design of Experiments could be facilitated by long-term planning for process improvement. To make assessment and evaluation possible, process performance should be documented not only after but also before an implementation takes place. Both knowledge about statistics and knowledge about the studied processes should be present in the teams carrying out experiments. / Approved (Godkänd); 1998; 20070404 (ysko)
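To make the contrast with one-factor-at-a-time experimentation concrete, here is a minimal sketch of a two-level (2³) full factorial design with main-effect estimation; the factor names and responses are hypothetical and not taken from the case studies.

```python
import itertools
import numpy as np

# 2^3 full factorial design: each factor at coded levels -1 and +1
factors = ["temperature", "pressure", "sinter_time"]   # hypothetical factors
design = np.array(list(itertools.product([-1, 1], repeat=len(factors))))

# Hypothetical responses (e.g., measured strength) for the 8 runs
y = np.array([52.0, 55.1, 60.3, 63.2, 51.8, 54.9, 61.0, 64.1])

# Main effect of a factor = mean response at +1 minus mean response at -1
for j, name in enumerate(factors):
    effect = y[design[:, j] == 1].mean() - y[design[:, j] == -1].mean()
    print(f"main effect of {name}: {effect:+.2f}")
```

Eight runs give main effects and interactions for all three factors at once, whereas varying one factor at a time with the same budget leaves interactions invisible.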
164

Squat Detection in Railway Switches & Crossings Using Point Machine Vibration

Zuo, Yang January 2022 (has links)
Railway switches and crossings (S&Cs) are among the most important high-value components in a railway network, and a single failure of such an asset could result in severe network disturbance, huge economic loss, and even severe accidents. Therefore, potential defects need to be detected at an early stage and the status of the S&C must be monitored to prevent such consequences. One type of defect that can occur is called a squat. A squat is a local defect like a dent or an open pit in the rail surface. In this thesis, a testbed including a full-scale S&C and a bogie wagon was studied. Vibrations were measured for different squat sizes by an accelerometer mounted at the point machine while a bogie was travelling along the S&C. A method of processing the vibration data and the speed data is proposed to investigate the feasibility of detecting and quantifying the severity of a squat. A group of features was extracted and an isolation forest was applied to generate anomaly scores that estimate the health status of the S&C. One key technology applied is wavelet denoising. The study shows that it is possible to monitor the development of the squat size introduced in the testbed by measuring point machine vibrations. The relationships between the normalised peak-to-peak amplitude of the vibration signal and the squat depth were estimated. The results also show that the proposed method is effective and can produce anomaly scores that can indicate the general health status of an S&C regarding squat defects.
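A rough sketch of the kind of pipeline described above (wavelet denoising, simple vibration features, isolation forest scoring) is given below; it assumes the PyWavelets and scikit-learn libraries, and the features, parameters and synthetic signals are illustrative only, not the thesis's actual choices.

```python
import numpy as np
import pywt                                   # PyWavelets, assumed available
from sklearn.ensemble import IsolationForest

def denoise(signal, wavelet="db4", level=4):
    """Soft-threshold wavelet denoising with a universal threshold."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise estimate from finest detail
    thr = sigma * np.sqrt(2 * np.log(len(signal)))
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

def features(signal):
    """A few simple per-passage features; the thesis's feature set differs."""
    return [np.ptp(signal), np.std(signal), np.mean(np.abs(signal))]

# Hypothetical training passages (healthy) and one new passage to score
rng = np.random.default_rng(0)
X_train = np.array([features(denoise(rng.normal(size=2048))) for _ in range(50)])
X_new = np.array([features(denoise(rng.normal(scale=1.5, size=2048)))])

forest = IsolationForest(random_state=0).fit(X_train)
print("anomaly score:", -forest.score_samples(X_new))     # higher = more anomalous
```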
165

An accurate reliability modeling technique for hybrid modular redundant digital systems using modular characteristic error parameters

Nelson, Victor Peter January 1978 (has links)
No description available.
166

Measures of agreement for qualitative data

Wolfson, Christina, 1955- January 1978 (has links)
No description available.
167

Dynamic Fault Tree Analysis: State-of-the-Art in Modeling, Analysis, and Tools

Aslansefat, K., Kabir, Sohag, Gheraibia, Y., Papadopoulos, Y. 04 August 2020 (has links)
Safety and reliability are two important aspects of dependability that need to be rigorously evaluated throughout the development life-cycle of a system. Over the years, several methodologies have been developed for the analysis of the failure behavior of systems. Fault tree analysis (FTA) is one of the well-established and widely used methods for safety and reliability engineering of systems. The fault tree, in its classical static form, is inadequate for modeling dynamic interactions between components and is unable to include temporal and statistical dependencies in the model. Several attempts have been made to alleviate these limitations of static fault trees (SFT). Dynamic fault trees (DFT) were introduced to enhance the modeling power of their static counterpart. In DFT, the expressiveness of the fault tree was improved by introducing new dynamic gates. While the introduction of the dynamic gates helps to overcome many limitations of SFT and allows a wide range of complex systems to be analyzed, it brings some overhead with it. One such overhead is that the existing combinatorial approaches used for qualitative and quantitative analysis of SFTs are no longer applicable to DFTs. This has led to several successful attempts to develop new approaches for DFT analysis. The methodologies used so far for DFT analysis include, but are not limited to, algebraic solutions, Markov models, Petri nets, Bayesian networks, and Monte Carlo simulation. To illustrate the usefulness of the modeling capability of DFTs, many benchmark studies have been performed in different industries. Moreover, software tools have been developed to aid in the DFT analysis process. Firstly, in this chapter, we provide a brief description of the DFT methodology. Secondly, the chapter reviews a number of prominent DFT analysis techniques such as Markov chains, Petri nets, Bayesian networks, and the algebraic approach, and provides insight into their working mechanisms, applicability, strengths, and challenges. The reviewed techniques cover both qualitative and quantitative analysis of DFTs. Thirdly, we discuss the emerging trends in machine-learning-based approaches to DFT analysis. Fourthly, the research performed on sensitivity analysis in DFTs is reviewed. Finally, we provide some potential future research directions for DFT-based safety and reliability analysis.
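As a minimal illustration of the Markov-chain route to DFT quantification, the sketch below evaluates the unreliability of a cold-spare (CSP) gate by solving its three-state continuous-time Markov chain with a matrix exponential; the failure rates are invented and the example is not drawn from any particular tool.

```python
import numpy as np
from scipy.linalg import expm

# Cold-spare (CSP) gate as a 3-state CTMC:
# state 0: primary running, spare dormant
# state 1: primary failed, spare running
# state 2: both failed (gate output = failure, absorbing)
lam_primary, lam_spare = 1e-3, 2e-3          # illustrative failure rates [1/h]

Q = np.array([
    [-lam_primary, lam_primary, 0.0],
    [0.0,          -lam_spare,  lam_spare],
    [0.0,           0.0,        0.0],
])

for t in (100.0, 1000.0, 10000.0):           # mission times in hours
    p = expm(Q * t)[0]                       # state distribution starting in state 0
    print(f"t = {t:>7.0f} h  unreliability = {p[2]:.4f}")
```

The same state-space idea is what makes dynamic gates tractable, but it is also why Markov-based DFT analysis suffers from state-space explosion for large trees.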
168

An Investigation of Software Metrics' Effect on COBOL Program Reliability

Day, Henry Jesse II 20 June 1996 (has links)
The purpose of this research was to predict a COBOL program's reliability from software characteristics that are found in the program's source code. The first step was to select factors, based on the human information processing model, that are associated with changes in computer program reliability. These factors (software metrics) were then studied quantitatively to determine which of them affect COBOL program reliability, and a statistical model was developed that predicts COBOL program reliability. Reliability was selected because the reliability of computer programs can be used by systems professionals and auditors to make decisions. Using the Human Information Processing Model to study the act of creating a computer program, several hypotheses were derived about program characteristics and reliability. These hypotheses were categorized as size, structure, and temporal hypotheses. These characteristics were then used to test several prediction models for the reliability of COBOL programs. Program characteristics were measured by a program called METRICS. METRICS was written by the author in the Pascal programming language. It accepts COBOL programs as input and produces as output seventeen measures of complexity. Actual programs and related data were then gathered from a large insurance company over the course of one year. The data were used to test the hypotheses and to find a model for predicting the reliability of COBOL programs. The operational definition of reliability was the probability of a program executing without abending. The size of a program, its cyclomatic complexity, and the number of times a program has been executed were used to predict reliability. A regression model was developed that predicts the reliability of a COBOL program from the program's characteristics. The model had a prediction error of 9.3%, an R² of 15%, and an adjusted R² of 13%. The most important finding of the research is that increasing the size of a program's modules, not the total size of the program, is associated with decreased reliability. / Ph. D.
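The regression modelling described above can be sketched in a few lines; the example below fits an ordinary least-squares model of reliability on module size, cyclomatic complexity and execution count using invented data, not the insurance-company data set from the study.

```python
import numpy as np

# Hypothetical program metrics: [module size (LOC), cyclomatic complexity, executions]
rng = np.random.default_rng(42)
n = 120
X = np.column_stack([
    rng.integers(50, 2000, n),      # module size
    rng.integers(1, 60, n),         # cyclomatic complexity
    rng.integers(1, 5000, n),       # number of executions
])
# Invented "reliability" (probability of executing without abending)
y = np.clip(0.99 - 2e-5 * X[:, 0] - 1e-3 * X[:, 1] + rng.normal(0, 0.02, n), 0, 1)

# Ordinary least squares with an intercept column
A = np.column_stack([np.ones(n), X.astype(float)])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
r2 = 1 - np.sum((y - A @ beta) ** 2) / np.sum((y - y.mean()) ** 2)
print("coefficients:", np.round(beta, 6), " R^2:", round(r2, 3))
```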
169

Reliability Transform Method

Young, Robert Benjamin 22 July 2003 (has links)
Since the end of the Cold War, the United States has been the single dominant naval power in the world. The emphasis of the last decade has been to reduce cost while maintaining this status. As the Navy's infrastructure decreases, so too does its ability to be an active participant in all aspects of ship operations and design. One way that the Navy has achieved large savings is by using the Military Sealift Command to manage day-to-day operations of the Navy's auxiliary and underway replenishment ships. While these ships are an active part of the Navy's fighting force, they are infrequently put into harm's way. The natural progression in the design of these ships is to have them fully classified under current American Bureau of Shipping (ABS) rules, as they closely resemble commercial ships. The first new design to be fully classed under ABS is the T-AKE. The Navy and ABS consider the T-AKE program a trial to determine whether a partnership between the two organizations can extend into the classification of all new naval ships. A major difficulty in this venture is how to translate the knowledge base which led to the development of current military specifications into rules that ABS can use for future ships. The specific task required by the Navy in this project is to predict the inherent availability of the new T-AKE class ship. To accomplish this task, the reliability of T-AKE equipment and machinery must be known. Under normal conditions reliability data would be obtained from past ships with a similar mission, equipment and machinery. Due to the unique nature of the T-AKE acquisition, this is not possible. Because of the use of commercial off-the-shelf (COTS) equipment and machinery, military equipment and machinery reliability data cannot be used directly to predict T-AKE availability. This problem is compounded by the fact that existing COTS equipment and machinery reliability data developed in commercial applications may not be applicable to a military application. A method for deriving reliability data for commercial equipment and machinery adapted or used in military applications is required. A Reliability Transform Method is developed that allows the interpolation of reliability data between commercial equipment and machinery operating in a commercial environment, commercial equipment and machinery operating in a military environment, and military equipment and machinery operating in a military environment. The reliability data for T-AKE is created using this Reliability Transform Method and the commercial reliability data. The reliability data is then used to calculate the inherent availability of T-AKE. / Master of Science
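The transform idea can be illustrated with a toy calculation: derive an environment factor from equipment whose failure rate is known in both a commercial and a military environment, then apply it to COTS equipment known only from commercial service. This is a hypothetical illustration of the general idea, not the thesis's actual Reliability Transform Method.

```python
# Toy illustration of transforming commercial reliability data to a military
# environment. All numbers and the scaling rule are hypothetical.
def environment_factor(lambda_mil_env, lambda_com_env):
    """Ratio describing how much harsher the military environment is,
    derived from equipment observed in both environments."""
    return lambda_mil_env / lambda_com_env

def transform(lambda_cots_com_env, k_env):
    """Estimate a COTS failure rate in the military environment."""
    return lambda_cots_com_env * k_env

k = environment_factor(4.0e-5, 2.5e-5)        # failures per hour, invented
lam_cots_mil = transform(1.8e-5, k)           # COTS item known only commercially
mtbf_hours = 1.0 / lam_cots_mil
print(f"estimated COTS failure rate in military service: {lam_cots_mil:.2e} /h "
      f"(MTBF ~ {mtbf_hours:,.0f} h)")
```

The transformed failure rates would then feed an availability model (for example, inherent availability A = MTBF / (MTBF + MTTR)) for the whole ship.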
170

Quantifying validity and reliability of GPS derived distances during simulated tennis movements

Tessaro, Edoardo 09 February 2017 (has links)
Tennis is a competitive sport attracting millions of players and fans worldwide. During a competition, the physical component crucially affects the final result of a match. In field sports such as soccer, physical demand data are collected using the global positioning system (GPS). There are questions regarding the validity and reliability of using GPS technology for court sports such as tennis. The purpose of this study is to determine the validity and reliability of GPS for measuring distances covered during simulated tennis movements. This was done by comparing GPS-recorded distances to distances determined with a calibrated trundle wheel. Two SPI HPU units were attached to the wheel. Four different trials were performed to assess accuracy and reliability: a distance trial (DIST), a shuttle run trial (SHUT), a change of direction trial (COD) and a random movement trial (RAND). The latter three trials were performed on a tennis court and designed to mimic movements during a tennis match. Bland-Altman analysis showed that during all trials there were small differences between the trundle wheel and GPS derived distances. Bias for the DIST, SHUT, COD and RAND trials was -0.02±0.10, -0.51±0.15, -0.24±0.19 and 0.28±0.20%, respectively. Root mean squared (RMS) errors for the four trials were 0.41±0.10, 1.28±0.10, 1.70±0.10 and 1.55±0.13%. Analysis of paired units showed good reliability, with mean bias and RMS errors <2%. These results suggest that SPI HPU units are both accurate and reliable for simulated tennis movements. They can be confidently used to determine the physical demands of court sports like tennis. / Master of Science
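The agreement statistics reported above (bias, limits of agreement and RMS error between GPS and trundle-wheel distances) can be computed in a few lines; the sketch below uses invented paired distances, not the study's data.

```python
import numpy as np

# Paired distances in metres: trundle wheel (criterion) vs GPS (hypothetical values)
wheel = np.array([100.2, 99.8, 100.1, 100.4, 99.9, 100.0])
gps   = np.array([100.0, 99.3, 100.3, 100.1, 99.5, 100.2])

diff_pct = (gps - wheel) / wheel * 100          # per-trial error in percent
bias = diff_pct.mean()                          # Bland-Altman style mean bias
loa = 1.96 * diff_pct.std(ddof=1)               # 95% limits of agreement (half-width)
rms = np.sqrt(np.mean(diff_pct ** 2))           # root-mean-square error

print(f"bias = {bias:+.2f}%  limits of agreement = ±{loa:.2f}%  RMS error = {rms:.2f}%")
```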
