1

Estimating the expected latency to failure due to manufacturing defects

Dorsey, David Michael (30 September 2004)
Manufacturers of digital circuits test their products to find defective parts so they are not sold to customers. Despite extensive testing, some defective products pass the testing process. To combat this problem, manufacturers have developed a metric called defective part level, which measures the percentage of parts that pass testing but are actually defective. While this is useful for the manufacturer, the customer would like to know how long it will take for a manufacturing defect to affect circuit operation. For a defect to be detected during circuit operation, it must be excited and observed at the same time. This research shows the correlation between defect detection during automatic test pattern generation (ATPG) testing and normal operation for both combinational and sequential circuits. This information is then used to formulate a mathematical model to predict the expected latency to failure due to manufacturing defects.
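The thesis model itself is not given in the abstract, but a minimal illustrative Python sketch shows the core idea under a simplifying assumption: if a defect must be excited and observed in the same cycle, and cycles are independent, the latency to first failure is geometric, so its expectation is the reciprocal of the per-cycle detection probability. All names and probabilities below are hypothetical.

```python
# Hypothetical sketch: expected latency to first failure when a defect
# must be both excited and observed in the same cycle.
def expected_latency_cycles(p_excite: float, p_observe: float) -> float:
    """Expected cycles until a defect first affects circuit operation,
    assuming excitation and observation are independent each cycle."""
    p_detect = p_excite * p_observe  # joint per-cycle detection probability
    if p_detect <= 0:
        return float("inf")          # defect never manifests
    return 1.0 / p_detect            # mean of a geometric distribution

# Example: a site excited in 1% of cycles and observed in 5% of cycles.
print(expected_latency_cycles(0.01, 0.05))  # 2000 cycles on average
```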
2

Modeling defective part level due to static and dynamic defects based upon site observation and excitation balance

Dworak, Jennifer Lynn (30 September 2004)
Manufacturing testing of digital integrated circuits is essential for high quality. However, exhaustive testing is impractical, and only a small subset of all possible test patterns (or test pattern pairs) may be applied. Thus, it is crucial to choose a subset that detects a high percentage of the defective parts and produces a low defective part level. Historically, test pattern generation has often been seen as a deterministic endeavor. Test sets are generated to deterministically ensure that a large percentage of the targeted faults are detected. However, many real defects do not behave like these faults, and a test set that detects them all may still miss many defects. Unfortunately, modeling all possible defects as faults is impractical. Thus, it is important to fortuitously detect unmodeled defects using high quality test sets. To maximize fortuitous detection, we do not assume a high correlation between faults and actual defects. Instead, we look at the common requirements for all defect detection. We deterministically maximize the observations of the least-observed sites while randomly exciting the defects that may be present. The resulting decrease in defective part level is estimated using the MPGD model. This dissertation describes the MPGD defective part level model and shows how it can be used to predict defective part levels resulting from static defect detection. Unlike many other predictors, its predictions are a function of site observations, not fault coverage, and thus it is generally more accurate at high fault coverages. Furthermore, its components model the physical realities of site observation and defect excitation, and thus it can be used to give insight into better test generation strategies. Next, we investigate the effect of additional constraints on the fortuitous detection of defects, specifically as we focus on detecting dynamic defects instead of static ones. We show that the quality of the randomness of excitation becomes increasingly important as defect complexity increases. We introduce a new metric, called excitation balance, to estimate the quality of the excitation, and we show how excitation balance relates to the constant τ in the MPGD model.
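The abstract does not reproduce the MPGD model's exact form; the hypothetical Python sketch below (not the MPGD model itself) only illustrates why per-site observation counts, rather than fault coverage, drive the escape probability: a defect at a site escapes detection if it is never excited during any of that site's observations. All probabilities are invented.

```python
# Hypothetical sketch (not the MPGD model): defective part level as a
# function of per-site observation counts, assuming a defect needs
# simultaneous excitation (probability p_exc) and observation.
def estimated_defect_level(obs_counts, p_defect_per_site=1e-4, p_exc=0.5):
    """Probability that a part is defective yet escapes the test set.
    obs_counts: times each site was observed during testing."""
    escape = 0.0
    for n_obs in obs_counts:
        # A defect here escapes if never excited while its site is observed.
        escape += p_defect_per_site * (1.0 - p_exc) ** n_obs
    return escape

# Observation balance matters: the same total number of observations,
# spread evenly, yields a lower defect level than a skewed distribution.
print(estimated_defect_level([10, 10, 10, 10]))  # balanced: ~3.9e-07
print(estimated_defect_level([37, 1, 1, 1]))     # skewed:   ~1.5e-04
```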
3

Flygplansavisningens miljöpåverkan vid svenska flygplatser / The Environmental Impact of Aircraft De-icing at Swedish Airports

Marklund, Lars (January 2004)
The aim of this thesis was to answer a number of questions about the environmental consequences of aircraft de-icing. A further aim was to suggest how the environmental consequences due to the release of de-icing fluids can be measured and reduced.

The main environmental impact of aircraft de-icing is due to the large oxygen demand for the degradation of glycol-based de-icing fluids released into the environment. The effect of the increased oxygen demand depends on where the degradation occurs in the ecosystem. In a sensitive ecosystem, the large oxygen demand could create an anaerobic environment that would be harmful to many types of organisms.

To reduce the negative effects of the applied de-icing fluid, there is some type of collection system at every regular airport in Sweden. The methods of collection can be divided into two general groups: hydrological isolation and vacuum sweeper trucks. When the area used for hydrological isolation is relatively small, it is called a central de-icing pad. This thesis investigates which methods are being used at 16 of the Swedish airports with the most intense de-icing activity. Of all of these airports, only one does not use vacuum sweeper trucks. Six of the airports use central de-icing pads and five use hydrological isolation of a larger area. The investigation of the efficiency of each method showed no significant differences, owing to the lack of accurate measurements and to the different measurement methods employed at different airports.

This thesis also examines which methods for measuring the efficiency are being used, their weaknesses, and what alternative methods are available. Suggestions are also given to minimize the environmental consequences of aircraft de-icing, taking into account both leakage of the de-icing fluid and its judicious use.

The case study of Stockholm-Bromma Airport includes a more detailed investigation of the de-icing activities, and a rough mass balance is established. The aim of establishing the mass balance is to determine how much of the de-icing fluid is collected, how much runs off to the storm water system, and how much reaches diffuse sinks. The results show that even if the collection rate is low, only a small part of the de-icing fluid reaches the storm water system. A relatively large part goes to the diffuse sinks, where the de-icing fluid degrades on the soil surface or percolates into the soil. The case study also investigates the probable impacts on the surrounding environment due to aircraft de-icing at Stockholm-Bromma Airport, and suggestions are made for how to reduce the impact.
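As an illustration of the rough mass balance the case study describes, a small Python sketch with invented figures: fluid applied must equal collected fluid plus storm-water runoff plus the remainder attributed to diffuse sinks.

```python
# Illustrative glycol mass balance for one de-icing season (figures invented).
applied_kg = 100_000      # total glycol applied to aircraft
collected_kg = 45_000     # recovered via de-icing pads / sweeper trucks
storm_water_kg = 5_000    # measured leaving via the storm water system

# Whatever is neither collected nor measured in storm water is attributed
# to diffuse sinks: degradation on the soil surface or percolation.
diffuse_sinks_kg = applied_kg - collected_kg - storm_water_kg

print(f"Collection rate: {collected_kg / applied_kg:.0%}")   # 45%
print(f"To diffuse sinks: {diffuse_sinks_kg} kg "
      f"({diffuse_sinks_kg / applied_kg:.0%})")               # 50000 kg (50%)
```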
4

Assessment of IxLoad in an MPG Environment

Tang, Zhiqiang; Peng, Yue (January 2013)
Long Term Evolution (LTE) is the latest mobile network technology published by the Third Generation Partnership Project (3GPP). It might become a dominant technology for the next generation, and it is attracting a great deal of attention from the top global corporations. IxLoad is a real-world traffic emulator designed by the test solution provider Ixia. The Mobile Packet Gateway (MPG), developed by Ericsson, is commercial network equipment that provides a smart interface between the mobile network (Global System for Mobile Communication (GSM), Wideband Code Division Multiple Access (WCDMA), LTE) and the internet for operators' networks. In this thesis, the MPG is utilized to assess the capacity and LTE functionality of IxLoad. Capacity estimation will verify the maximum number of simulated users that can be supported by IxLoad and will test the maximum throughput IxLoad can achieve with a particular number of simulated users, under conditions involving a particular application scenario such as browsing HTTP. In addition to Session Management, other features such as Tracking Area Update and Handover, Busy Hour Functionality, Deep Packet Inspection, Multiple Access Point Names (APNs), and Dynamic Quality of Service Enforcement are also covered in the functionality assessment. Moreover, this thesis gives a brief introduction to the Evolved Packet System (EPS), the Evolved Packet Core (EPC), and the functionality of the MPG, in addition to the role of the MPG in the EPS. The newest features of IxLoad are also presented in this document. Finally, as the outcome of this thesis, several suggestions are proposed for improvements to IxLoad and the MPG.
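A hedged sketch of the capacity-estimation idea: binary-searching for the largest simulated-user count that still meets a pass criterion. The `run_load_test` function is a hypothetical stand-in for driving the traffic emulator and checking its statistics, and the capacity figure is invented.

```python
# Hypothetical capacity search: find the maximum number of simulated users
# a gateway sustains while a pass criterion (e.g. session success rate) holds.
def run_load_test(num_users: int) -> bool:
    """Stand-in for launching an emulated-traffic run and checking results.
    In practice this would drive the emulator and inspect its statistics."""
    return num_users <= 83_000  # pretend capacity, for demonstration only

def max_supported_users(low: int, high: int) -> int:
    """Binary search for the largest user count that still passes."""
    while low < high:
        mid = (low + high + 1) // 2
        if run_load_test(mid):
            low = mid        # mid passed; capacity is at least mid
        else:
            high = mid - 1   # mid failed; capacity is below mid
    return low

print(max_supported_users(1, 200_000))  # 83000
```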
5

Comparison of Multiple Models for Diabetes Using Model Averaging

Al-Mashat, Alex (January 2021)
Pharmacometrics is widely used in drug development. Models are developed to describe pharmacological measurements with data gathered from a clinical trial. The information can then be applied to, for instance, safely establish dose-response relationships of a substance. Glycated hemoglobin (HbA1c) is a common biomarker used by models within antihyperglycemic drug development, as it reflects the average plasma glucose level over the previous 8-12 weeks. There are five different nonlinear mixed-effects models that describe HbA1c formation. They use different biomarkers, such as mean plasma glucose (MPG), fasting plasma glucose (FPG), fasting plasma insulin (FPI), or a combination of those. The aim of this study was to compare their performance on a population and an individual level using model averaging (MA) and to explore whether reduced trial durations and different treatments could affect the outcome. Multiple weighting methods were applied to the MA workflow, such as the Akaike information criterion (AIC), cross-validation (CV), and a bootstrap model averaging method. Results show that, in general, models that use MPG to describe HbA1c formation on a population level could potentially outperform models using other biomarkers; however, the models showed similar performance on the individual level. Further studies on the relationship between biomarkers and model performance must be conducted, since they could potentially lay the ground for better individual HbA1c predictions. This could then be applied in antihyperglycemic drug development and possibly reduce sample sizes in clinical trials. With this project, we have illustrated how to perform MA on the aforementioned models using different biomarkers, as well as the difference between model weights on a population and individual level.
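Of the weighting methods mentioned, AIC-based (Akaike) weights have a standard closed form: w_i = exp(-Δ_i/2) / Σ_j exp(-Δ_j/2), where Δ_i is each model's AIC minus the minimum AIC. A short Python sketch with invented AIC values:

```python
import math

def akaike_weights(aics):
    """Convert AIC values into normalized Akaike model weights."""
    best = min(aics)
    deltas = [a - best for a in aics]           # AIC differences
    raw = [math.exp(-d / 2.0) for d in deltas]  # relative likelihoods
    total = sum(raw)
    return [r / total for r in raw]

# Five hypothetical HbA1c models (e.g. MPG-, FPG-, FPI-driven variants):
aics = [1012.4, 1015.1, 1013.0, 1020.7, 1012.9]
weights = akaike_weights(aics)
print([f"{w:.3f}" for w in weights])

# A model-averaged prediction is then the weighted sum of model predictions:
# y_avg = sum(w * y for w, y in zip(weights, model_predictions))
```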
6

Topology Network Optimization of Facility Planning and Design Problems

Monga, Ravi Ratan Raj (29 October 2019)
The research attempts to provide a graph-theory-based approach to solve the facility layout problem, which has generally been approached in the past using the Quadratic Assignment Problem (QAP), an algebraic method. It is a very complex problem in the NP-hard optimization class because of its nonlinear quadratic objective function and (0,1) binary variables. The research is divided into three phases which together provide an optimal facility layout: a block plan solution with the MHS (material handling solution) projected onto the block plan. In phase one, we solve for the position of departments in a facility based on flow and a utility factor (weight for location). The positions of all the departments are identified on the vertices of an MPG (maximal planar graph), which maximizes the possibility of flow. We use named MPGs from the literature throughout the research. The grouping of departments is achieved through GMAFLAD, a QSP (quadratic set packing) based optimizer. In phase 2, the dual of each MPG is solved, with department locations as per phase 1, to generate Voronoi graphs. These graphs are then expanded by an ingenious parameter optimization formulation to achieve area fitting for individual cases. The optimization modeling software Lingo 17.0 is used to solve the parameter optimization that generates the coordinates of the block plan. The plotting of coordinates for the block plan graphics is done via Autodesk Inventor 2019. In phase 3, the solution for the MHS is achieved using an RSMT (rectilinear Steiner minimal tree) graph approach. The Voronoi seed coordinates produced in phase 2 are processed by the GeoSteiner package to generate the RSMT graph for projection onto the block plan (also done in Inventor 2019). The graphical method employed in this research itself contains complex, NP-hard problem segments, which have been relaxed to polynomial time complexity by fragmenting them into groups and solving them in sections. Solving for the MPG and the RSMT are NP-hard problems, which have been restricted to N=32 here. Finally, to validate the research and its methodology, a real-life case study of a shipyard building for the data set of PDVSA, Venezuela, is performed and verified.
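As a small illustration of the phase-2 step, a Voronoi partition can be derived from department seed locations with SciPy's standard computational-geometry API; the coordinates below are invented, not from the dissertation.

```python
import numpy as np
from scipy.spatial import Voronoi

# Illustrative department seed coordinates (e.g. MPG vertex placements).
seeds = np.array([
    [0.0, 0.0], [4.0, 1.0], [1.0, 3.0],
    [5.0, 4.0], [2.5, 2.0], [0.5, 4.5],
])

vor = Voronoi(seeds)  # dual-style partition: one cell per department seed

print(vor.vertices)      # coordinates of Voronoi cell corners
print(vor.ridge_points)  # seed pairs whose cells share a boundary (adjacency)
```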
7

A Gasoline Demand Model For The United States Light Vehicle Fleet

Rey, Diana (1 January 2009)
The United States is the world's largest oil consumer, demanding about twenty-five percent of total world oil production. Whenever there are difficulties in supplying the increasing quantities of oil demanded by the market, the price of oil escalates, leading to what are known as oil price spikes or oil price shocks. The last oil price shock, the longest sustained oil price run-up in history, began its course in 2004 and ended in 2008. This last oil price shock initiated recognizable changes in transportation dynamics: transit operators realized that commuters switched to transit as a way to save on gasoline costs, consumers began to search the market for more efficient vehicles, leading car manufacturers to close 'gas guzzler' plants, and the government enacted a new law, the Energy Independence and Security Act of 2007, which called for the progressive improvement of the fuel efficiency of the light vehicle fleet up to 35 miles per gallon in 2020. The past trend of gasoline consumption will probably change, so in this context a gasoline consumption model was developed in this thesis to ascertain how some of the changes will impact future gasoline demand. Gasoline demand was expressed in oil-equivalent million barrels per day, in a two-step Ordinary Least Squares (OLS) explanatory variable model. In the first step, vehicle miles traveled, expressed in trillion vehicle miles, was regressed on the independent variables: vehicles, expressed in million vehicles, and the price of oil, expressed in dollars per barrel. In the second step, fuel consumption in million barrels per day was regressed on vehicle miles traveled and on the fuel efficiency indicator expressed in miles per gallon. The explanatory model was run in EVIEWS, which allows checking for normality, heteroskedasticity, and serial correlation. Serial correlation was addressed by inclusion of autoregressive or moving average error correction terms. Multicollinearity was solved by first differencing. The 36-year sample series (1970-2006) was divided into a 30-year sub-period for calibration and a 6-year "hold-out" sub-period for validation. The Root Mean Square Error (RMSE) criterion was adopted to select the "best model" among other possible choices, although other criteria were also recorded. Three scenarios for the size of the light vehicle fleet in a forecasting period up to 2020 were created. These scenarios were equivalent to growth rates of 2.1, 1.28, and about 1 percent per year. The last, more optimistic vehicle growth scenario, from the gasoline consumption perspective, appeared consistent with the theory of vehicle saturation. One scenario for the average miles-per-gallon indicator was created for each of the fleet-size scenarios by distributing the fleet every year assuming a 7 percent replacement rate. Three scenarios for the price of oil were also created: the first used the average price of oil in the sample since 1970, the second was obtained by extending the price trend by exponential smoothing, and the third used a long-term forecast supplied by the Energy Information Administration. The three scenarios created for the price of oil covered a range between a low of about 42 dollars per barrel and highs in the low 100s. The 1970-2006 gasoline consumption trend was extended to 2020 by ARIMA Box-Jenkins time series analysis, leading to a gasoline consumption value of about 10 million barrels per day in 2020. This trend line was taken as the reference or baseline of gasoline consumption. The savings that resulted from application of the explanatory-variable OLS model were measured against this baseline. Even in the most pessimistic scenario, the savings obtained by the progressive improvement of the fuel efficiency indicator seem enough to offset the increase in consumption that would otherwise have occurred by extension of the trend, leaving consumption at 2006 levels, or about 9 million barrels per day. The most optimistic scenario led to savings of up to about 2 million barrels per day below the 2006 level, or about 3 million barrels per day below the baseline in 2020. The "expected" or average consumption in 2020 is about 8 million barrels per day, 2 million barrels below the baseline or 1 million below the 2006 consumption level. More savings are possible if technologies such as plug-in hybrids, which have already been implemented in other countries, take over soon, are efficiently promoted, or are given incentives or subsidies such as tax credits. The savings in gasoline consumption may in the future contribute to stabilizing the price of oil as worldwide demand is tamed by oil-saving policy changes implemented in the United States.
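A hedged sketch of the two-step OLS structure described above, using synthetic data and invented relationships (statsmodels stands in for EVIEWS; none of the coefficients are from the thesis):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 37  # annual observations, e.g. 1970-2006

# Synthetic stand-ins for the thesis variables (relationships invented).
vehicles = np.linspace(100, 250, n) + rng.normal(0, 2, n)   # million vehicles
oil_price = 20 + 30 * rng.random(n)                         # dollars/barrel
vmt = 0.01 * vehicles - 0.002 * oil_price \
      + rng.normal(0, 0.02, n)                              # trillion miles
mpg = np.linspace(13, 22, n)                                # fleet efficiency
fuel = 100 * vmt / mpg + rng.normal(0, 0.05, n)             # consumption proxy

# Step 1: vehicle miles traveled on fleet size and oil price.
step1 = sm.OLS(vmt,
               sm.add_constant(np.column_stack([vehicles, oil_price]))).fit()

# Step 2: fuel consumption on vehicle miles traveled and fuel efficiency.
step2 = sm.OLS(fuel, sm.add_constant(np.column_stack([vmt, mpg]))).fit()

print(step1.params)  # intercept, vehicles, oil price
print(step2.params)  # intercept, VMT, miles per gallon
```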
