81 |
Scheduling of Wafer Test Processes in Semiconductor Manufacturing / Lu, Yufeng / 16 November 2001
Scheduling is one of the most important issues in the planning of manufacturing systems. This research focuses on solving the test scheduling problem that arises in a semiconductor manufacturing environment. Semiconductor wafer devices undergo a series of test processes conducted on computer-controlled test stations at various temperatures. A test process consists of both setup operations and processing operations on the test stations. The test operations occur in a specified order on the wafer devices, resulting in precedence constraints for the schedule. Furthermore, the assignment of the wafer devices to test stations and the sequence in which they are processed affect the time required to finish the test operations, resulting in sequence-dependent setup times. The goal of this research is to develop a realistic model of the semiconductor wafer test scheduling problem and provide heuristics for scheduling the precedence-constrained test operations with sequence-dependent setup times.
A mathematical model is presented and two heuristics are developed to solve the scheduling problem with the objective of minimizing the makespan required to test all wafer devices on a set of test stations. The heuristic approaches generate a sorted list of wafer devices as a dispatching sequence and then schedule the wafer lots on test stations in order of appearance on the list.
An experimental analysis and two case studies are presented to validate the proposed solution approaches. In the two case studies, the heuristics are applied to actual data from a semiconductor manufacturing facility, and their results are compared to the actual schedule executed in that facility. For both case studies, the proposed solution approaches decreased the makespan by 23-45% relative to the actual executed schedule. The solution approach developed in this research can be integrated with the planning software of a semiconductor manufacturing facility to improve productivity. / Master of Science
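As an illustration of the list-scheduling idea described above, here is a minimal sketch in Python. The dispatching rule (longest processing time first), the temperature-based setup model, and all names and numbers are assumptions for the example, not the heuristics developed in the thesis:

```python
# Minimal sketch of a list-scheduling heuristic for parallel test stations
# with sequence-dependent setup times. The sorting rule and the setup
# model are illustrative assumptions, not the thesis's actual heuristics.

def list_schedule(lots, num_stations, setup):
    """lots: list of (lot_id, proc_time, temp); setup(prev_temp, temp) -> time."""
    # Dispatching sequence: longest processing time first (an assumed rule).
    sequence = sorted(lots, key=lambda lot: -lot[1])
    # Each station tracks [finish_time, last_temp, assigned_lots].
    stations = [[0.0, None, []] for _ in range(num_stations)]
    for lot_id, proc_time, temp in sequence:
        # Pick the station where this lot would finish earliest,
        # accounting for the sequence-dependent setup time.
        best = min(stations, key=lambda s: s[0] + setup(s[1], temp))
        best[0] += setup(best[1], temp) + proc_time
        best[1] = temp
        best[2].append(lot_id)
    return max(s[0] for s in stations), [s[2] for s in stations]

# Example: setup is zero if the station is already at temperature.
setup = lambda prev, temp: 0.0 if prev == temp else 30.0
lots = [("A", 120, 25), ("B", 90, 125), ("C", 60, 25), ("D", 150, 125)]
makespan, assignment = list_schedule(lots, num_stations=2, setup=setup)
print(makespan, assignment)
```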
|
82 |
Futures-Based Forecasts of U.S. Crop Prices / Zhu, Jiafeng / 03 October 2017
Over the last decade, U.S. crop prices have become significantly more volatile. Volatile markets pose increased risks for agricultural market participants and create a need for reliable price forecasts. The research discussed in this paper develops approaches to forecast crop cash prices based on the prices of related futures contracts.
Corn, soybeans, soft red winter wheat, and cotton are the focus of this research. Since price data for these commodities are non-stationary, this paper uses two approaches to address the problem. The first is to forecast the difference in prices between the current and a future period; the second is to model distinct price regimes. The five-year moving average approach serves as the benchmark when comparing these approaches.
This research evaluated model performance using R-squared, mean errors, root mean squared errors, the modified Diebold-Mariano test, and the encompassing test. The results show that both the difference model and the regime model perform better than the benchmark in most cases, but do not differ significantly from each other. Based on these findings, the regime model was used to forecast the cash prices of corn and soybeans, the difference model was used for cotton, and the benchmark was used to forecast the SRW wheat cash price. / Master of Science / This research develops models to forecast cash prices of corn, soybeans, wheat and cotton using the underlying futures prices. Two alternative approaches are proposed. The difference model focuses on forecasting the differences between current and future time prices. The regime model uses external data to determine potential structural breaks in price relationships. The out-of-sample performance of these models is compared to the benchmark of a five-year average using various performance criteria. The results show that the regime model performs better for corn and soybeans, while the difference model is best for cotton. For wheat, the results are mixed, but the benchmark seems to perform better than the proposed models.
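To make the two approaches concrete, here is a hedged sketch of a five-year-average-basis benchmark and of one plausible reading of the difference model. The function names, the regression form, and all numbers are assumptions for the example; the paper's own specifications may differ:

```python
# Hedged sketch of two futures-based cash-price forecasts, loosely
# following the abstract's description. Model forms are assumptions.
import numpy as np

def benchmark_forecast(futures_now, past_basis):
    """Five-year-average benchmark: cash = futures + mean historical basis,
    where basis = cash - futures observed in the same period of each of
    the previous five years."""
    return futures_now + np.mean(past_basis[-5:])

def difference_forecast(cash_now, futures_now, past_changes, past_spreads):
    """Difference model (one plausible reading): regress the realized change
    in the cash price over the forecast horizon on the futures-cash spread
    at the start of each past horizon, then apply the fit to today's spread."""
    x = np.column_stack([np.ones(len(past_spreads)), past_spreads])
    alpha, beta = np.linalg.lstsq(x, np.asarray(past_changes), rcond=None)[0]
    return cash_now + alpha + beta * (futures_now - cash_now)

# Toy usage with made-up numbers (five past horizons).
spreads = [0.10, -0.05, 0.20, 0.00, 0.15]   # futures minus cash at start
changes = [0.08, -0.02, 0.15, 0.01, 0.12]   # realized cash-price changes
print(difference_forecast(cash_now=3.50, futures_now=3.65,
                          past_changes=changes, past_spreads=spreads))
```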
|
83 |
Validity of interpretation: a user validity perspective beyond the test score / MacIver, R., Anderson, Neil, Costa, Ana-Cristina, Evers, A. / April 2014
This paper introduces the concept of user validity and provides a new perspective on the validity of interpretations from tests. Test interpretation is based on outputs such as test scores, profiles, reports, spreadsheets of multiple candidates' scores, etc. The user validity perspective focuses on the interpretations a test user makes given the purpose of the test and the information provided in the test output. It shows how user validity can be extended to content-, criterion-, and, to some extent, construct-related validity. It provides a basis for researching the validity of interpretations and an improved understanding of the appropriateness of different approaches to score interpretation, as well as how to design test outputs and assessments that are pragmatic and optimal.
|
84 |
A score test of homogeneity in generalized additive models for zero-inflated count data / Nian, Gaowei
Master of Science / Department of Statistics / Wei-Wen Hsu / Zero-Inflated Poisson (ZIP) models are often used to analyze count data with excess zeros. In the ZIP model, the Poisson mean and the mixing weight are often assumed to depend on covariates through regression techniques. In other words, the effect of covariates on the Poisson mean or the mixing weight is specified using a proper link function coupled with a linear predictor, which is simply a linear combination of unknown regression coefficients and covariates. In practice, however, this predictor may not be linear in the regression parameters but curvilinear or nonlinear. In such situations, a more general and flexible approach should be considered. One popular method in the literature is Zero-Inflated Generalized Additive Models (ZIGAM), which extend zero-inflated models to incorporate the use of Generalized Additive Models (GAM). These models can accommodate a nonlinear predictor in the link function. For ZIGAM, it is also of interest to conduct inference for the mixing weight, particularly evaluating whether the mixing weight equals zero. Many methodologies have been proposed to examine this question, but all of them were developed under classical zero-inflated models rather than ZIGAM. In this report, we propose a generalized score test to evaluate whether the mixing weight is equal to zero under the framework of ZIGAM with a Poisson model. Technically, the proposed score test is developed based on a novel transformation of the mixing weight coupled with proportional constraints on the ZIGAM, which assume that the smooth components of covariates in the Poisson mean and the mixing weight have a proportional relationship. An intensive simulation study indicates that the proposed score test outperforms the other existing tests when the mixing weight and the Poisson mean truly involve a nonlinear predictor. The recreational fisheries data from the Marine Recreational Information Program (MRIP) survey conducted by the National Oceanic and Atmospheric Administration (NOAA) are used to illustrate the proposed methodology.
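The abstract does not reproduce the proposed statistic, and the ZIGAM version is more involved; as background only, a score test for zero inflation in the simpler covariate-free Poisson setting (in the spirit of van den Broek, 1995) can be sketched as follows:

```python
# Background sketch: a score test for zero inflation in a covariate-free
# Poisson model, in the spirit of van den Broek (1995). This is NOT the
# report's generalized score test for ZIGAM, which handles smooth
# covariate effects; it only illustrates the basic construction.
import numpy as np
from scipy.stats import chi2

def zip_score_test(y):
    y = np.asarray(y, dtype=float)
    n, lam = len(y), y.mean()           # MLE of the Poisson mean under H0
    p0 = np.exp(-lam)                   # P(Y = 0) under the null Poisson
    n0 = np.sum(y == 0)                 # observed count of zeros
    score = n0 / p0 - n                 # score for the inflation parameter
    info = n * (1 - p0) / p0 - n * lam  # information evaluated under H0
    stat = score**2 / info
    return stat, chi2.sf(stat, df=1)    # chi-square(1) reference

rng = np.random.default_rng(0)
y = np.where(rng.random(500) < 0.2, 0, rng.poisson(3.0, 500))  # inflated data
print(zip_score_test(y))
```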
|
85 |
Automated Testing: What Are the Success Factors for Its Introduction and Use? / Marsell Klaminder, Elin / January 2016
When developing and maintaining systems, it is important that they are tested to detect potential faults. This can be done through both manual and automated tests; automated tests use tools or programs to execute the tests. The purpose of this work is to identify the success factors for the introduction and use of automated testing. This has been investigated through a theoretical literature review and an empirical study of four Swedish government agencies, covering how they work with automated testing, how well they consider it to work, how the introduction process was carried out, and what problems, if any, they have experienced. The aim is to recommend how other organisations can best introduce automated testing and to offer solutions to problems that may arise during its use. The results show that automated tests allow the testing effort to be expanded, since more tests can be executed in a short time. This is particularly useful for regression tests, where the system's functionality is checked after changes have been made. Automated tests also let testers focus on performing more complex and creative, more rewarding, tests. When automated testing is to be introduced in an organisation, there are a number of things to consider in order to minimise problems and achieve a successful result. It is important that management understands why automated testing is being introduced and supports the process. Staff need training in what automated tests are and what they can be used for. It is also important to think carefully when choosing a tool: what the tool will be used for, what budget is available, and which programming languages can be used. Finally, it requires the insight that building up the automated testing effort takes considerable work and time before all the benefits are realised, and that test cases and test data must be maintained regularly in order to remain usable in the long term.
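As a minimal illustration of the kind of automated regression test the thesis discusses, here is a sketch using pytest. The function under test, its behavior, and the framework choice are invented for the example; the agencies in the study used their own tools:

```python
# A toy automated regression test: once written, it can be re-run
# cheaply after every change to catch regressions in system behavior.

def normalize_personal_number(raw: str) -> str:
    """Toy system function: normalize an identity number to a 12-digit
    form (assumed behavior, for illustration only)."""
    digits = "".join(ch for ch in raw if ch.isdigit())
    return digits if len(digits) == 12 else "19" + digits

def test_normalize_keeps_twelve_digit_input():
    assert normalize_personal_number("19900101-1234") == "199001011234"

def test_normalize_expands_ten_digit_input():
    assert normalize_personal_number("900101-1234") == "199001011234"
```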
|
86 |
FLIGHT TEST DATA ACQUISITION SYSTEM / Maschhoff, Robert H., Johnson, David W. / October 1993
International Telemetering Conference Proceedings / October 25-28, 1993 / Riviera Hotel and Convention Center, Las Vegas, Nevada / This paper describes a data acquisition system with integral signal conditioning capability. It is a distributed, bus-oriented system which greatly reduces the amount of wiring and structural penetrations required in previous systems used for this purpose. The system interfaces with virtually all of the transducer types existing on operational aircraft, as well as those typically used for flight testing and proofing, such as strain gauges. It outputs data in digital form to a central unit which combines this data with other aircraft operational parameters for recording on tape or telemetry to the ground.
The system consists of a remote multiplexer (RMUX) which provides the formatting and central processing functions and has provision for 16 plug-in signal conditioning modules. It also has provision for up to 20 external multiplexers (EMUXes), which are designed to service a cluster of like sensors in a local area. EMUX types include bridge, thermocouple, and a highly integrated pressure scanner unit. Signal conditioning and processing functions include input transient protection, variable blocks of gain, analog pre-sample filtering, and precision bandlimiting using digital techniques.
The penalty for moving the acquisition units to remote locations on the aircraft, as compared to previously used cabin-mounted equipment, is a much more severe environment. Temperature extremes and vibration are particularly severe around the engines. Because of the planned use on operational aircraft, provisions to prevent lightning propagation to the cabin are a significant feature.
|
87 |
Development and Validation of a Resilience Test for Enlisted Personnel of the Armed Forces / Medina Oviedo, Samantha / January 2012
The thesis addresses as its research problem the reliability and validity of the test called the Resilience Test for enlisted personnel of the Armed Forces. To answer this question, a psychometric study was carried out using a descriptive correlational method and a cross-sectional design.
The research used the Resilience Test for enlisted personnel, based on the theory of Rutter and Wolin. Validity was established using the expert-judgment method (content validity) together with construct validity. Reliability, the estimate of measurement error, was obtained using Cronbach's internal-consistency method, finding coefficients above .77. Raw scores were converted to percentiles.
The study sample was probabilistic and consisted of 381 enlisted members of the Armed Forces.
The study found that the Resilience Test for enlisted personnel of the Armed Forces shows significant content validity and reliability within the permitted margins of error, and significant reliability by the internal-consistency and test-retest methods for enlisted personnel of the Armed Forces.
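The internal-consistency coefficient reported above is Cronbach's alpha, which can be computed as follows. The item matrix here is simulated (the thesis data are not available), but the formula is the standard one:

```python
# Sketch of the Cronbach's alpha computation behind the reported
# coefficients above .77. The item scores are simulated for the example.
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k_items) array of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total score
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(1)
trait = rng.normal(size=(381, 1))                 # shared latent trait
items = trait + rng.normal(scale=0.8, size=(381, 20))
print(round(cronbach_alpha(items), 2))
```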
|
88 |
The measurement of electrical parameters and trace impurity effects in MOS capacitors / McGillivray, Ian Grant / January 1987
No description available.
|
89 |
An investigation of the market efficiency of the Nairobi Securities Exchange / Njuguna, Josephine M.
This study tests for the market efficiency of the Nairobi Securities Exchange (NSE) after the year 2000 to determine the effect of technological advancements on market efficiency. The data used are the NSE 20 Share Index over the period 2001 to 2015 and the NSE All Share Index (NSE ASI) from its inception in 2008 to 2015. We cannot accept the Efficient Market Hypothesis (EMH) for the NSE using the serial correlation test, the unit root tests, or the runs test. However, we can accept the EMH under the more robust variance ratio test. Overall, the market efficiency results are mixed. The most significant finding is that the efficiency of the NSE has increased since the year 2000, which suggests that advancements in technology have contributed to the increase in the market efficiency of the NSE. / Business Management / M. Com. (Business Management)
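One of the tests named above, the variance ratio test of Lo and MacKinlay, can be sketched as follows under the homoskedasticity assumption. This simple version omits the small-sample and heteroskedasticity corrections used in careful empirical work, and the data here are simulated:

```python
# Hedged sketch of a Lo-MacKinlay variance ratio test: under a random
# walk, the variance of q-period returns is q times the variance of
# one-period returns, so VR(q) should be close to 1.
import numpy as np
from scipy.stats import norm

def variance_ratio_test(log_prices, q):
    r = np.diff(log_prices)                  # one-period log returns
    n = len(r)
    var_1 = r.var(ddof=1)
    rq = np.convolve(r, np.ones(q), mode="valid")  # overlapping q-period returns
    var_q = rq.var(ddof=1) / q
    vr = var_q / var_1
    se = np.sqrt(2 * (2 * q - 1) * (q - 1) / (3 * q * n))
    z = (vr - 1) / se
    return vr, z, 2 * norm.sf(abs(z))        # VR, z-stat, two-sided p-value

rng = np.random.default_rng(2)
prices = np.cumsum(rng.normal(0, 0.01, 3000))    # random walk: VR near 1
print(variance_ratio_test(prices, q=5))
```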
|
90 |
Optimized Automatic Calibration Tool for Flight Test Analogue Parameters / Torralbo, Pilar Vicaria / October 2015
ITC/USA 2015 Conference Proceedings / The Fifty-First Annual International Telemetering Conference and Technical Exhibition / October 26-29, 2015 / Bally's Hotel & Convention Center, Las Vegas, NV / Calibration processes consume a large quantity of resources: equipment and people, time and cost. As the number of calibration points increases, the resources required increase to the same extent. This automatic tool, aimed at reducing these resources, has been designed for commanding, managing, and analyzing in real time a large number of data points acquired from the specimen under calibration and from the standards used in the calibration process, while applying the metrological algorithms that validate each calibration point. Its greatest achievement is the implementation of the rules for accepting or discarding a data point and the level of automation of the process. In the last flight test campaign, its use was crucial for providing the data on time with the required high accuracy. It enabled the commissioning of almost 200 temperature parameters in a short period of time, taking advantage of equipment whose nominal accuracy was not high enough for direct application.
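As a sketch of the kind of accept/discard rule such a tool automates for each calibration point: the stability and error limits, and the rule itself, are invented for the example, since the paper does not publish its metrological algorithms:

```python
# Illustrative accept/discard rule for a calibration point: accept only
# if repeated readings are stable and their mean agrees with the
# reference standard. Limits and rule are assumptions for the sketch.
import statistics

def accept_point(readings, standard_value, max_spread, max_error):
    """Accept a calibration point if the readings' spread is within
    max_spread and their mean is within max_error of the standard."""
    spread = max(readings) - min(readings)
    error = abs(statistics.fmean(readings) - standard_value)
    return spread <= max_spread and error <= max_error

# Example: five readings of a temperature channel against a 100.0 degC standard.
print(accept_point([99.98, 100.01, 100.02, 99.99, 100.00],
                   standard_value=100.0, max_spread=0.1, max_error=0.05))
```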
|