291

Nanowire Growth Process Modeling and Reliability Models for Nanodevices

Fathi Aghdam, Faranak January 2016 (has links)
Nowadays, nanotechnology is becoming an inescapable part of everyday life. The major barrier to its rapid growth is our inability to produce nanoscale materials reliably and cost-effectively. In fact, the current yield of nano-devices is very low (around 10%), which makes their fabrication expensive and uncertain. To overcome this challenge, the first and most important step is to investigate how to control variation in nano-structure synthesis. Reliability research in nanotechnology can be classified from either a material perspective or a device perspective. The first direction focuses on restructuring materials and/or optimizing process conditions at the nano-level (nano-materials). The other direction is linked to nano-devices and includes the creation of nano-electronic and electro-mechanical systems with nano-level architectures, taking into account the reliability of future products. In this dissertation, we have investigated two topics, one on nano-materials and one on nano-devices. In the first research work, we have studied the optimization of one of the most important nanowire growth processes using statistical methods. Research on nanowire growth with patterned arrays of catalyst has shown that the wire-to-wire spacing is an important factor affecting the quality of the resulting nanowires. To improve the process yield and the length uniformity of fabricated nanowires, it is important to reduce the resource competition between nanowires during the growth process. We have proposed a physical-statistical nanowire-interaction model that considers the shadowing effect and the shared substrate diffusion area to determine the optimal pitch ensuring minimum competition between nanowires. A sigmoid function is used in the model, and the least squares method is used to estimate the model parameters. The estimated model is then used to determine the optimal spatial arrangement of catalyst arrays.
This work is an early attempt to use a physical-statistical modeling approach to study selective nanowire growth for the improvement of process yield. In the second research work, the reliability of nano-dielectrics is investigated. As electronic devices get smaller, reliability issues pose new challenges due to the unknown underlying physics of failure (i.e., failure mechanisms and modes). This necessitates new reliability analysis approaches for nano-scale devices. One of the most important nano-devices is the transistor, which is subject to various failure mechanisms. Dielectric breakdown is known to be the most critical one and has become a major barrier to reliable circuit design at the nano-scale. Due to the need for aggressive downscaling of transistors, dielectric films are being made extremely thin, and in recent years this has led to the adoption of high-permittivity (high-k) dielectrics as an alternative to the widely used SiO₂. Since most time-dependent dielectric breakdown test data on bilayer stacks show significant deviations from a Weibull trend, we have proposed two new approaches to modeling the time to breakdown of bilayer high-k dielectrics. In the first approach, we have used a marked space-time self-exciting point process to model the defect generation rate. A simulation algorithm is used to generate defects within the dielectric space, and an optimization algorithm is employed to minimize the Kullback-Leibler divergence between the empirical distribution obtained from the real data and the one based on the simulated data, in order to find the best parameter values and to predict the total time to failure. The novelty of the presented approach lies in using a conditional intensity for trap generation in the dielectric that is a function of the time, space, and size of the previous defects. In the second approach, a k-out-of-n system framework is proposed to estimate the total failure time after the generation of more than one soft breakdown.
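The k-out-of-n framework mentioned in the second approach can be illustrated with a minimal sketch. Assuming n independent conduction paths, each with soft-breakdown probability p (an independence assumption made here only for illustration, not stated in the abstract), the probability that at least k paths have broken down is a binomial tail sum:

```python
from math import comb

def at_least_k_of_n(n: int, k: int, p: float) -> float:
    """Probability that at least k of n independent paths have
    suffered a soft breakdown, each with probability p.
    Illustrative only; the dissertation's model is more general."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))
```

For example, if the stack is declared failed at k = 2 soft breakdowns among n = 3 paths with p = 0.5, `at_least_k_of_n(3, 2, 0.5)` gives 0.5.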
292

Wireless Transducer Systems Architectures – A User’s Perspective

Blakely, Patrick A. 10 1900 (has links)
International Telemetering Conference Proceedings / October 20-23, 2003 / Riviera Hotel and Convention Center, Las Vegas, Nevada / This paper provides essential requirements for, and describes some possible architectures of, so-called Wireless Transducer Systems from the user's perspective, and discusses the application advantages of each architecture in the airplane-testing environment. The intent of this paper is to stimulate discussion in the transducer user and supplier communities and in standards committees, leading to increased product suitability and lower cost for commercial off-the-shelf wireless transducer products.
293

IPCM Telemetry System: Experimental Results

Carvalho, Marco Aurélio 10 1900 (has links)
ITC/USA 2015 Conference Proceedings / The Fifty-First Annual International Telemetering Conference and Technical Exhibition / October 26-29, 2015 / Bally's Hotel & Convention Center, Las Vegas, NV / The aeronautical industry has been suffering financial cutbacks, and the market faces new challenges from new companies. The telemetry community has been facing increasing use of the electromagnetic spectrum by a variety of applications (e.g., 4G); after all, telemetry is everywhere. In view of these issues, and focused on the inherent requirements of flight-test applications, the IPEV R&D group proposes the iPCM telemetry architecture as a solution to the existing reliability and bandwidth issues associated with the telemetry link. In this article, an experimental assembly has been performed as a proof of concept of the iPCM architecture. The results demonstrate iPCM's ability to regenerate corrupted data, providing the required data integrity and reliability, as well as the capability to dynamically select the FTI transmitted parameter list to optimize the link bandwidth.
294

A study of tool life and machinability parameters in high speed milling of hardened die steels

Niu, Caotan., 牛草坛. January 2007 (has links)
published_or_final_version / abstract / Mechanical Engineering / Master / Master of Philosophy
295

Impact of decentralized power on power systems

Morales, Ana A B 28 September 2006 (has links)
Wind generation is one of the most successful renewable sources for the production of electrical energy. Wind power offers relatively high capacities, with generation costs that are becoming competitive with conventional energy sources. However, a major obstacle to its effective use as a power source is that it is both intermittent and diffuse, as wind speed is highly variable and site-specific. This translates into large voltage and frequency excursions and dynamically unstable situations when wind power changes rapidly. Very high wind speeds can result in a sudden loss of wind generator production. The requirement to ensure that sufficient spinning reserve capacity exists within the system to compensate for a sudden loss of generation becomes crucial. From the utility operators' point of view, the primary objective is the security of the system, followed by the quality of the supplied power. In order to guard system security and quality of supply and retain acceptable levels, a maximum allowed wind power penetration (wind margin) is normally assumed by the operators. Very conservative methods are used to assess the impact of wind power, and the consequence is under-exploitation of the wind power potential in a given region. This thesis presents a study of current methods of wind power assessment, divided into three parts:
1. Part I: Impact on the Security of Power Systems
2. Part II: Impact on the Power Quality
3. Part III: Impact on the Dynamic Security of Power Systems
296

Optimization of the maintenance policy of reciprocating compressors based on the study of their performance degradation

Vansnick, Michel P D G 21 December 2006 (has links)
Critical equipment plays an essential role in industry because of its lack of redundancy. Failure of critical equipment results in a major economic burden that affects the profit of the enterprise. The lack of redundancy arises because the high cost of the equipment is usually combined with its high reliability. As a result, when analyzing the reliability of such equipment, there are few opportunities to run pieces of equipment to failure in order to actually verify component life. Reliability is the probability that an item can perform its intended function for a specified interval of time under stated conditions, while achieving a low long-term cost of ownership for the system considering cost alternatives. From the economic standpoint, the overriding reliability issue is cost, particularly the cost of unreliability of existing equipment caused by failures. Classical questions about reliability are:
· How long will the equipment function before failure occurs?
· What are the chances that a failure will occur in a specified interval for turnaround?
· What is the best turnaround interval?
· What is the inherent reliability of the equipment?
· What are the risks of delaying repairs/replacements?
· What is the cost of unreliability?
· …
We try to answer these questions for a critical reciprocating compressor that has been in service for only 4 years and has undergone only a few failures. Professionals in all industries face the problems of performing maintenance actions and optimizing maintenance planning for their repairable systems. Constructing stochastic models of these repairable systems and using them to optimize maintenance strategies requires a basic understanding of several key reliability and maintainability concepts and a mathematical modeling approach. Our objective is therefore to present fundamental concepts and modeling approaches in the case of a critical reciprocating compressor.
We developed a stochastic model, not to simulate a reciprocating compressor with a complete set of components, but mainly to optimize the overhaul period taking into account only the main failure modes. How can we lower the cost? How can we reduce or remove maintenance actions that are not strictly necessary? How can we improve the long-term profitability of ageing plants while strictly respecting Health-Safety-Environment (HSE) requirements? A reciprocating compressor is a complex machine that cannot be described by a single reliability function. A compressor has several failure modes, and each failure mode is assumed to have its own Weibull cumulative distribution function. The compressor is then a system with several Weibull laws in series. We extend the usual procedure for minimizing the expected total cost to a group of components. Different components may have different preventive maintenance needs, but optimizing preventive maintenance at the component level may be sub-optimal at the system level. We also study the reliability importance indices, which are valuable in establishing the direction and prioritization of actions in a reliability improvement plan, i.e., which component should be improved to increase the overall lifetime and thus reduce the system costs. When considering a large system with many items that are maintained or replaced preventively, it is advantageous to schedule the preventive maintenance in a block so that the system downtime is kept as small as possible. This requires that resources are available so that the maintenance of components can be performed simultaneously or according to a well-defined sequence. The result of the stochastic model optimization came as a surprise. We expected to find a new mean time between failures (MTBF) larger than the actual overhaul period. Instead, the model showed that there is no economic interest in scheduling systematic preventive maintenance for this reciprocating compressor.
Nevertheless, we cannot simply wait for a failure (and the associated corrective maintenance), because the loss-of-production cost is too high and this compressor has no spare. Preventive maintenance is not the optimum strategy; predictive maintenance is. But what does predictive maintenance mean? It is a maintenance policy of regularly inspecting equipment to detect incipient changes or deterioration in its mechanical or electrical condition and performance. The idea is to perform corrective maintenance only when needed, before the occurrence of failure. We need to find how to detect performance deterioration of the compressor with a couple of weeks' or days' notice before failure, so that the right maintenance activity can be scheduled at the optimum moment. To summarize, the main findings of this thesis are:
· a new method to estimate the shape factor of a Weibull distribution function,
· a stochastic model demonstrating that we have to move from systematic preventive maintenance to predictive maintenance,
· a low-cost system based on a thermodynamic approach to monitor a reciprocating compressor,
· an automatic detection of performance deterioration.
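The series system of Weibull failure modes described in this abstract can be sketched in a few lines: the compressor survives to time t only if every independent failure mode survives, so the system reliability is the product of the per-mode Weibull survival probabilities. The shape and scale values below are hypothetical, chosen only to show the computation, not taken from the thesis.

```python
from math import exp

def weibull_reliability(t: float, beta: float, eta: float) -> float:
    """Survival probability of one Weibull failure mode at time t
    (beta: shape factor, eta: characteristic life)."""
    return exp(-((t / eta) ** beta))

def series_system_reliability(t: float, modes) -> float:
    """System of independent failure modes in series: the system
    survives only if every mode survives, so multiply reliabilities."""
    r = 1.0
    for beta, eta in modes:
        r *= weibull_reliability(t, beta, eta)
    return r

# Hypothetical failure modes as (shape, characteristic life in hours).
modes = [(1.3, 12000.0), (2.5, 20000.0), (0.9, 30000.0)]
```

With a single mode and beta = 1, the model reduces to the exponential case, and at t = eta the reliability is exp(-1), a useful sanity check on the implementation.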
297

Organizational factors in the reliability assessment of offshore systems

Biondi, Esteban L. 22 October 1998 (has links)
The reliability of ocean systems depends on organizational factors. It has been shown that low-probability / high-consequence system failures are overwhelmingly induced by organizational factors. However, no methodology is yet widely accepted for the evaluation of this phenomenon or its accurate quantification. A complementary qualitative approach is proposed based on the CANL (Complex Adaptive Non-Linear) model. In the first part, an understanding of the organizational processes that affect reliability is sought. The approach is applied to several case studies based on published information: the "Story of a Platform Audit" (where no failure occurred) and some offshore accidents. A methodology is proposed to complement regular safety audit procedures, and the approach is also shown to be useful for improving post-mortem investigations. In the second part, quantitative probabilistic formulations are revisited in the light of the understanding obtained through the qualitative approach, and some of the limitations of these quantitative methods are pointed out. The Reliability State of an Organization is defined and a ranking for its evaluation is proposed. Preliminary guidelines are presented for using this approach as a framework to identify suitable quantitative methods for a given case. The use of a qualitative approach is demonstrated, and a different insight into organizational factors is achieved based on a disciplined approach that relies on experience. Significant conclusions are obtained regarding quantitative methods, their limitations, and their appropriate use. / Graduation date: 1999
298

Parallel Paths of Equal Reliability Assessed using Multi-Criteria Selection for Identifying Priority Expenditure

Hook, Tristan William January 2013 (has links)
This research project identifies factors that justify having parallel network links of similar reliability. Two key questions require consideration: 1) When is it optimal to have or create two parallel paths of equal or similar reliability? 2) How could a multi-criteria selection method be implemented for assigning expenditure? Asset and project management always face financial constraints, which requires a constant balancing of funds against priorities. Many methods are available to address these needs, but two of the most common tools are risk assessment and economic evaluation. In principle, both are well utilised and generally respected in the engineering community; when comparing parallel systems, however, both tend to favour a single priority link, a single option. Practical conception also tends to support this, as the expenditure strengthens one link well above the alternative. The example used to demonstrate the potential for parallel paths of equal or similar reliability is the Wellington link from near the airport (Troy Street) up the coast to Paekakariri. Both the local and highway options offer various benefits of ease of travel to shopping facilities. Investigating this section provides several combinations, from parallel highways to highway and local roads, with differing management criteria and associated land use. Generalised techniques are applied to the network. Risk is addressed as a reliability index figure that is preset to provide a consistent parameter (equal reliability) for each link investigated. Consequences are assessed with multi-criteria selection focusing on local benefits and shortcomings. Several models are used to build an understanding of how each consequence factor impacts the overall model and to identify the consequences of such a process. Economics are discussed only briefly, as funding in the engineering community is almost always subject to financial constraints.
No specific analytical assessment has been completed. General results indicate that there are supporting arguments for undertaking a multi-criteria selection assessment when comparing parallel networks. Situations do occur in which there is benefit in parallel networks of equal or similar reliability, and in those cases equal funding of both links can be supported.
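One common form of multi-criteria selection is a weighted-sum score per link. A minimal sketch follows; the criterion names, scores, and weights are entirely hypothetical illustrations, not values from the study:

```python
def weighted_score(scores: dict, weights: dict) -> float:
    """Weighted-sum multi-criteria score for one network link,
    normalised by the total weight (scores on a 0-1 scale)."""
    total_w = sum(weights.values())
    return sum(scores[c] * weights[c] for c in weights) / total_w

# Hypothetical criteria for two parallel links (0 = worst, 1 = best).
weights = {"travel_time": 2.0, "shop_access": 1.0, "resilience": 1.0}
highway = {"travel_time": 0.9, "shop_access": 0.4, "resilience": 0.7}
local   = {"travel_time": 0.5, "shop_access": 0.9, "resilience": 0.6}
```

Here `weighted_score(highway, weights)` is 0.725 and `weighted_score(local, weights)` is 0.625: when two links score this closely, a single-priority ranking hides the case for funding both, which is the situation the thesis examines.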
299

Fire Safety System Effectiveness for a Risk-Informed Design Tool

Frank, Kevin Michael January 2013 (has links)
The purpose of this research is to identify how uncertainty in fire safety system effectiveness should be considered in a new risk-informed design fire tool, B-RISK. Specific objectives were to collect the available data on fire safety system effectiveness from the literature, investigate methods to improve fire safety system effectiveness data collection, develop the risk-informed design fire tool to propagate the uncertainties, and recommend methods to rank the sources of uncertainty for fire safety system effectiveness for appropriate model selection. The scope of the research is limited to the effects of systems on fire development and smoke spread and does not include the effects of the fire on systems (such as loss of structural integrity) or interactions with occupants. Sprinkler effectiveness data from recent New Zealand Fire Service data is included with a discussion of the uncertainty in this type of data and recommendations for improving data collection. The ability of the model to predict multiple sprinkler activations is developed in conjunction with a hydraulic submodel in B-RISK to include water supply pressure effects on sprinkler effectiveness. A new method of collecting reliability data on passive fire protection elements such as doors was developed. Data collected on the probability for doors in shared means of escape to be open and the time doors are open during occupant evacuation using this method is presented. Available data on smoke management system effectiveness is listed, along with a discussion of why there is more uncertainty associated with these systems compared with sprinkler systems. The capabilities of B-RISK for considering fire safety system effectiveness are demonstrated using Australasian case studies.
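Uncertainty propagation of the kind a risk-informed tool like B-RISK performs can be sketched, in highly simplified form, as a Monte Carlo loop in which the system's operational probability is itself uncertain. The distribution choice and parameter values here are hypothetical, not taken from B-RISK or from the New Zealand Fire Service data:

```python
import random

def fraction_controlled(p_mean: float, p_sd: float,
                        trials: int = 20000, seed: int = 42) -> float:
    """Monte Carlo sketch: each trial samples the sprinkler's
    operational probability from a normal distribution truncated to
    [0, 1], then samples operate/fail; returns the fraction of
    simulated fires that were controlled."""
    rng = random.Random(seed)
    controlled = 0
    for _ in range(trials):
        p = min(1.0, max(0.0, rng.gauss(p_mean, p_sd)))
        if rng.random() < p:
            controlled += 1
    return controlled / trials
```

The point of propagating the uncertainty rather than using a single point estimate is that design decisions can then be ranked by their sensitivity to the effectiveness assumption, as the thesis recommends.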
300

A redundancy approach to sensor failure detection : with application to turbofan engines

Piercy, Neil Philip January 1988 (has links)
No description available.
