Development of statistical methods for the surveillance and monitoring of adverse events which adjust for differing patient and surgical risks

Webster, Ronald A. January 2008
The research in this thesis has been undertaken to develop statistical tools for monitoring adverse events in hospitals that adjust for varying patient risk. The studies involved a detailed literature review of risk adjustment scores for patient mortality following cardiac surgery, comparison of institutional performance, the performance of risk adjusted CUSUM schemes for varying risk profiles of the populations being monitored, the effects of uncertainty in the estimates of expected probabilities of mortality on performance of risk adjusted CUSUM schemes, and the instability of the estimated average run lengths of risk adjusted CUSUM schemes found using the Markov chain approach. The literature review of cardiac surgical risk found that the number of risk factors in a risk model and its discriminating ability were independent, the risk factors could be classified into their "dimensions of risk", and a risk score could not be generalized to populations remote from its developmental database if accurate predictions of patients' probabilities of mortality were required. The conclusions were that an institution could use an "off the shelf" risk score, provided it was recalibrated, or it could construct a customized risk score with risk factors that provide at least one measure for each dimension of risk. The use of report cards to publish adverse outcomes as a tool for quality improvement has been criticized in the medical literature. An analysis of the report cards for cardiac surgery in New York State showed that the institutions' outcome rates appeared overdispersed compared to the model used to construct confidence intervals, and the uncertainty associated with the estimation of institutions' outcome rates could be mitigated with trend analysis. A second analysis of the mortality of patients admitted to coronary care units demonstrated the use of notched box plots, fixed and random effect models, and risk adjusted CUSUM schemes as tools to identify outlying hospitals. An important finding from the literature review was that the primary reason for publication of outcomes is to ensure that health care institutions are accountable for the services they provide. A detailed review of the risk adjusted CUSUM scheme was undertaken and the use of average run lengths (ARLs) to assess the scheme, as the risk profile of the population being monitored changes, was justified. The ARLs for in-control and out-of-control processes were found to increase markedly as the average outcome rate of the patient population decreased towards zero. A modification of the risk adjusted CUSUM scheme, where the step size between in-control and out-of-control outcome probabilities was constrained to be no less than 0.05, was proposed. The ARLs of this "minimum effect" CUSUM scheme were found to be stable. The previous assessment of the risk adjusted CUSUM scheme assumed that the predicted probability of a patient's mortality is known. A study of its performance, where the estimates of the expected probability of patient mortality were uncertain, showed that uncertainty at the patient level did not affect the performance of the CUSUM schemes, provided that the risk score was well calibrated. Uncertainty in the calibration of the risk model appeared to cause considerable variation in the ARL performance measures. The ARLs of the risk adjusted CUSUM schemes were approximated using simulation because the approximation method using the Markov chain property of CUSUMs, as proposed by Steiner et al. (2000), gave unstable results.
The cause of the instability was the method of computing the Markov chain transition probabilities, where probability is concentrated at the midpoint of its Markov state. If probability was assumed to be uniformly distributed over each Markov state, the ARLs were stabilized, provided that the scores for the patients' risk of adverse outcomes were discrete and finite.
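For illustration, below is a minimal sketch of the kind of risk-adjusted CUSUM assessed in the thesis, in the form proposed by Steiner et al. (2000): each patient contributes a log-likelihood-ratio score that depends on his or her predicted risk, and the run length is the number of patients monitored until the cumulative sum crosses the decision threshold. The Beta(1, 19) risk profile, the odds-ratio alternative of 2 and the threshold h = 4.5 are illustrative assumptions, not values taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

def cusum_run_length(h, r_a=2.0, odds_true=1.0, n_max=100_000):
    """One run length of a risk-adjusted CUSUM (Steiner et al., 2000).

    h         : decision threshold of the scheme
    r_a       : odds ratio under the out-of-control alternative (null R0 = 1)
    odds_true : odds ratio actually governing outcomes (1.0 = in control)
    """
    p = rng.beta(1.0, 19.0, n_max)                    # predicted risks (assumed profile)
    p_true = odds_true * p / (1 - p + odds_true * p)  # risks actually generating outcomes
    y = rng.random(n_max) < p_true                    # observed adverse events
    s = 0.0
    for t in range(n_max):
        # log-likelihood-ratio score for patient t: log(R_A) - log(1 - p + R_A*p)
        # on death, -log(1 - p + R_A*p) on survival
        w = (np.log(r_a) if y[t] else 0.0) - np.log(1 - p[t] + r_a * p[t])
        s = max(0.0, s + w)                           # CUSUM with reflection at zero
        if s >= h:
            return t + 1                              # signal after t + 1 patients
    return n_max

arl0 = np.mean([cusum_run_length(4.5) for _ in range(200)])
arl1 = np.mean([cusum_run_length(4.5, odds_true=2.0) for _ in range(200)])
print(f"simulated ARL0 ~ {arl0:.0f} patients, ARL1 ~ {arl1:.0f} patients")
```

Shrinking the average risk of the assumed Beta profile towards zero, or constraining the score's step size as in the "minimum effect" modification, can be explored by changing the corresponding parameters.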

Les nouvelles méthodes de navigation durant le Moyen Age / New navigational methods during the Middle Ages

Com'Nougue, Michel 29 November 2012
Les nouvelles méthodes de navigation durant le Moyen Age. Le navire de commerce à voile est propulsé par le vent et doit donc suivre cette direction générale. La navigation peut se définir selon un aspect d'abord stratégique comme le choix d'une route en tenant compte des contraintes imposées par le vent et un aspect tactique concernant le tracé et le contrôle, en cours d'exécution de cette route. 1- Dans un premier temps, la navigation antique ne se réfère qu'au seul vent qui est le moteur mais aussi le guide du navigateur pour suivre la route fixée par l'observation des traces qu'il imprime sur la mer. C'est la navigation à vue. La limite de la méthode est atteinte quand le vent devient changeant au large, ce qui oblige alors une vérification de la direction par l'observation des astres. 2- L'apparition de l'aiguille aimantée résout en partie ce problème. L'orientation géographique entraîne la mise au point, à la fin du XIIIe siècle, d'une nouvelle méthode : l'estime. L'estime est la résolution graphique des problèmes que pose le contrôle de la route choisie. Cette résolution suppose, d'une part, l'usage de la boussole et d'une orientation géographique et, d'autre part, une analyse vectorielle sur un support, la carte marine, qui est donc indissociable de la méthode. Le plus gros défaut de l'estime est que les positions sont définies par projection dans le futur de paramètres, cap et distances parcourues, actuels. Des différences sont donc à prévoir, qui entraînent une zone d'incertitude sur le point estimé. 3- Lorsqu'au début du XVe siècle les navigateurs se lancent dans l'inconnu, obligés de suivre le vent qui décrit des boucles, les voyages s'allongent sans voir la terre pour une confrontation avec des positions avérées. La taille des zones d'incertitude oblige le navigateur à préciser sa position finale par d'autres méthodes basées sur des observations astronomiques. On peut distinguer deux méthodes : tout d'abord, la méthode des hauteurs de polaire, de 1433 à 1480 environ, qui permet de finaliser la volta et d'effectuer un atterrissage selon une route Est-Ouest. L'analyse de la technique nautique de Colomb, qui utilise cette méthode, est très semblable à celle décrite par Ibn Majid dans son traité de navigation. Il est probable qu'il y a eu transmission, sans pouvoir préciser les circonstances exactes. Mais dès que les navigateurs franchissent l'équateur, la polaire devient indisponible et les navigateurs doivent observer le soleil. Cette deuxième méthode est plus délicate car les paramètres du soleil changent chaque jour. Ils obligent donc le navigateur à calculer la latitude, à partir de l'observation de la méridienne de soleil et par l'usage de tables de données solaires : os regimentos do sol. C'est cette méthode qui permet à Vasco da Gama de doubler le cap de Bonne-Espérance, en 1498, ce qui marque la fin de la période étudiée. Pour conclure, il faut remarquer que ces deux dernières méthodes sont le fruit d'une coopération entre les usagers et les scientifiques sous l'égide du pouvoir, décidé à atteindre le but fixé. C'est donc le fruit d'une véritable recherche scientifique. En second lieu, il faut également noter que les progrès de la navigation accompagnent des progrès parallèles en architecture navale, le gouvernail d'étambot, ainsi que de nouvelles procédures dans le commerce maritime. L'étude des interactions entre ces divers domaines reste à faire.
/ New navigational methods during the Middle Ages. A sailing vessel is pushed forward by the wind in the general direction toward which it is blowing. Navigation must comply with a strategic goal, the choice of a route to a port of destination taking this wind constraint into account, and a tactical aspect is involved in following this route and checking, throughout the voyage, the good guidance of the ship. 1- In the first ages of navigation, the mariner referred to the sole element at his disposal: the wind. It told him which direction to choose and whether it was a convenient time for sailing, and it also supplied the means of checking and controlling the course of the ship, by observation of the marks it prints on the surface of the sea. Variable wind is the limit of this method; in that case, only sky observation can give an indication of the direction to follow. 2- The discovery of the magnetic needle solved this problem, and from this new tool a new navigation method was implemented around the end of the XIIIth century. Dead reckoning is a way to determine the ship's position at any moment, using vector analysis to solve graphically the problems that checking the chosen course can raise. This graphical method uses the compass indications and necessarily requires a marine chart. The main problem of dead reckoning is that, since present data are used to reckon future positions, any error in assessing these data introduces an uncertainty into the position, and correction of the route by verification against actual landfalls is necessary; the longer the voyage without such confirmation, the bigger the uncertainty zone to be faced. 3- In the beginning of the XVth century, Portuguese mariners started to run the open ocean. They had to follow the wind, which runs along a long loop across the ocean, la volta. Running the open seas, without any land in sight against which to check the actual position, obliged mariners to elaborate new methods based on astronomical observations in order to reduce the size of this uncertainty zone when arriving at the landing point. A first method, in use from about 1433 to 1480, is based on observation of the altitude of the pole star; it allowed mariners to complete the volta and make a landfall along an East-West course. Analysis of C. Columbus's nautical art shows similarities with the written work of Ibn Majid, his contemporary Arab nautical expert; transmission is probable, although the exact circumstances cannot be specified. Crossing the equator made the pole star unavailable, so the method had to change, and the second method involved sun observations. This is more complex, as the sun's data change every day. Mariners therefore had to reckon the latitude, using observations of the sun's meridian passage together with tables of solar data: the so-called regimentos do sol. Through this method Vasco da Gama was able to reach the Indian Ocean after passing the Cape of Good Hope in 1498, which closes the period of this study. The conclusion should take into account the fact that these astronomical methods were not entirely empirical but the result of joint research by users, mariners and scientists. This endeavour was made possible because a central power, first the Infante and then King Joao II, was willing to proceed further south and gave its mariners the technical means to do so. A second conclusion is that the progress of navigation was accompanied by parallel progress in naval construction, such as the sternpost rudder, and by new contracts and ways of handling commercial matters in maritime trade. There are surely interactions between these three domains, but they have yet to be demonstrated.
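As a modern reconstruction of the two techniques sketched in this abstract, the fragment below sums course-and-distance vectors on a plane chart (dead reckoning) and applies the regimento do sol rule for latitude in the case where the sun culminates towards the equator from the observer. All courses, distances, altitude and declination values are invented for the example, and the plane-chart arithmetic deliberately ignores leeway, current and the sphericity of the Earth.

```python
import math

# Dead reckoning: each leg is (compass course in degrees from north,
# distance run as estimated from speed and elapsed time). Invented values.
legs = [(225.0, 30.0), (270.0, 45.0), (300.0, 25.0)]

x = y = 0.0                       # departure point; x grows east, y grows north
for course, dist in legs:
    theta = math.radians(course)
    x += dist * math.sin(theta)   # easting of the leg
    y += dist * math.cos(theta)   # northing of the leg
print(f"reckoned position: {x:+.1f} leagues east, {y:+.1f} leagues north")

# Latitude by the meridian sun, for a sun culminating south of the observer:
# latitude = declination + (90 degrees - observed meridian altitude).
declination, altitude = 12.0, 58.0    # declination from a table, altitude observed
print(f"estimated latitude: {declination + (90.0 - altitude):.1f} degrees north")
```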

A matemática e os circuitos elétricos de corrente contínua : uma abordagem analítica, prático-experimental e computacional

Costa, Ricardo Ferreira da January 2007
Este trabalho trata do desenvolvimento de um material didático, sob a forma de cadernos (presentemente, em forma de capítulos), acompanhado de protótipo de circuito simples para testes experimentais, a ser utilizado no ensino de nível médio. O conteúdo reunido nos cadernos abrange o desenvolvimento analítico de tópicos pertinentes à física-matemática, esquema para a construção do protótipo e exemplos utilizando recursos computacionais. Mais especificamente, buscou-se enfatizar o ensino dos tópicos de equações e sistemas lineares, motivados por fenômenos físicos. Pretendeu-se explorar o aspecto experimental (com a construção e o uso de protótipo de circuitos simples), o analítico (com a resolução de equações e sistemas lineares, e com uma introdução à programação linear) e o computacional (com uso da planilha eletrônica). Em todos os conteúdos desenvolvidos, é dada especial ênfase à interpretação, à análise e à validação dos resultados. Com este material, procura-se oferecer ao professor um conjunto de atividades didático-pedagógicas, que possam estimular a sua atuação crítica e criativa. E que, também, propiciem a reflexão e a análise na identificação e resolução de problemas, a fim de desencadear processos cognitivos que levem o aluno a compreender as interrelações entre a física e a matemática. / This work concerns the development of didactic material, in the form of notebooks (here, in chapters), accompanied by a simple circuit prototype for experimental tests, to be used in high school teaching. The content gathered in the notebooks comprises the analytical development of topics in mathematical physics, a scheme for building the prototype, and examples using computational resources. More specifically, the emphasis is on teaching equations and linear systems, motivated by physical phenomena. The intention was to explore the experimental aspect (with the building and use of a simple circuit prototype), the analytical (with the solution of equations and linear systems, and an introduction to linear programming) and the computational (with the use of spreadsheets). In all the topics developed, special emphasis is given to the interpretation, analysis and validation of the results. With this material, the aim is to offer the teacher a set of didactic-pedagogical activities that can stimulate critical and creative practice, and that also promote reflection and analysis in the identification and resolution of problems, in order to trigger cognitive processes that lead the student to understand the interrelations between physics and mathematics.
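As one possible illustration of the link the material draws between circuits and linear systems, the sketch below solves a two-mesh DC circuit by applying Kirchhoff's voltage law and solving the resulting linear system. The thesis itself works with spreadsheets; Python with numpy is used here only as a stand-in, and the component values are invented.

```python
import numpy as np

# Two-mesh resistive circuit: a 10 V source drives R1 in mesh 1, R2 is the
# branch shared by both meshes, and R3 closes mesh 2.  Kirchhoff's voltage
# law gives one linear equation per mesh current.
V1, R1, R2, R3 = 10.0, 100.0, 220.0, 330.0

A = np.array([[R1 + R2, -R2],
              [-R2,     R2 + R3]])   # coefficient matrix (ohms)
b = np.array([V1, 0.0])              # source vector (volts)

i1, i2 = np.linalg.solve(A, b)       # mesh currents (amperes)
print(f"i1 = {i1 * 1000:.2f} mA, i2 = {i2 * 1000:.2f} mA")
# Checking the answer, in the spirit of the material's emphasis on
# interpreting and validating results: the shared-branch voltage.
print(f"voltage across R2 = {(i1 - i2) * R2:.2f} V")
```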

Guidelines for a remedial reading programme for standard one and two pupils

Nel, Norma 01 1900
A synopsis of the importance and the nature of reading serves as the point of departure for this study. The pupils involved are learning restrained as well as Group A learning disabled pupils, although learning disabled pupils in Groups B and C can also be involved. The total reading process is illustrated by means of a reading model. The two main components, namely word identification and comprehension, form the basis of this study. The different subcategories featuring in each component are highlighted. This model serves as a framework for the diagnosis and remediation of reading problems. A teaching model is used to illustrate the complexity of teaching. The factors within the teaching model are indicated, as well as the ways they may serve when reading is taught. A comprehensive reading problem analysis table, compiled for the analysis of individual reading problems, facilitates identification of the remedial reading areas, as well as of the underlying subskills causing the problems, to be accommodated in remedial reading. A control chart, developed for recording the information concerning the pupil's reading problem areas and underlying subskills, facilitates the compilation of an integrated remedial reading programme. Existing exercises, selected from the works of various authors and adapted, provide guidelines and exercises for particular remedial reading areas. These guidelines serve as a point of departure for the compilation of a specific remedial reading programme for a particular pupil with reading problems. The reduction and choice of reading content for a specific pupil are set out as important aspects to be taken into consideration in reading remediation. Determining each pupil's reading levels, namely his/her independent level, instructional level and frustrational level, enables the teacher to choose the appropriate reading material. Two case studies elucidate how a remedial reading programme can be compiled according to the pupil's background, reading problems and inadequacies in the underlying subskills. / Teacher Education / D. Ed. (Orthopedagogics)

[en] DOUBLE-SAMPLING CONTROL CHARTS FOR ATTRIBUTES / [pt] GRÁFICOS DE CONTROLE POR ATRIBUTOS COM AMOSTRAGEM DUPLA

AURELIA APARECIDA DE ARAUJO 25 August 2005
[pt] Nesta tese é proposta a incorporação da estratégia de amostragem dupla, já utilizada em inspeção de lotes, ao gráfico de controle de np (número de defeituosos), com o objetivo de aumentar a sua eficiência, ou seja, reduzir o número médio de amostras até a detecção de um descontrole (NMA1), sem aumentar o tamanho médio de amostra (TMA) nem reduzir o número médio de amostras até um alarme falso (NMA0). Alternativamente, este esquema pode ser usado para reduzir o custo de amostragem do gráfico de np, uma vez que para obter o mesmo NMA1 que um gráfico de np com amostragem simples, o gráfico com amostragem dupla requererá menor tamanho médio de amostra. Para vários valores de p0 (fração defeituosa do processo em controle) e p1 (fração defeituosa do processo fora de controle), foi obtido o projeto ótimo do gráfico, ou seja, aquele que minimiza NMA1, tendo como restrições um valor máximo para TMA e valor mínimo para NMA0. O projeto ótimo foi obtido para vários valores dessas restrições. O projeto consiste na definição dos dois tamanhos de amostra, para o primeiro e o segundo estágios, e de um conjunto de limites para o gráfico. Para cada projeto ótimo foi também calculado o valor de NMA1 para uma faixa de valores de p1, além daquele para o qual o projeto foi otimizado. Foi feita uma comparação de desempenho entre o esquema desenvolvido e outros esquemas de monitoramento do número de defeituosos na amostra: o clássico gráfico de np (com amostragem simples), o esquema CuSum, o gráfico de controle de EWMA e o gráfico np VSS (gráfico adaptativo, com tamanho de amostra variável). Para a comparação, foram obtidos os projetos ótimos de cada um desses esquemas, sob as mesmas restrições e para os mesmos valores de p0 e p1. Assim, uma contribuição adicional dessa tese é a análise e otimização do desempenho dos esquemas CuSum, EWMA e VSS para np. O resultado final foi a indicação de qual é o esquema de controle de processo mais eficiente para cada situação. O gráfico de np com amostragem dupla aqui proposto e desenvolvido mostrou ser em geral o esquema mais eficiente para a detecção de aumentos grandes e moderados na fração defeituosa do processo, perdendo apenas para o gráfico VSS, nos casos em que p0, o tamanho (médio) de amostra e o aumento em p0 (razão p1/p0) são todos pequenos. / [en] In this thesis, the incorporation of the double-sampling strategy, used in lot inspection, into the np control chart (the chart for the number nonconforming) is proposed, with the purpose of improving its efficiency, that is, reducing the out-of-control average run length (ARL1) without increasing the average sample size (ASS) or reducing the in-control average run length (ARL0). Alternatively, this scheme can be used to reduce the np chart's sampling costs, since, in order to achieve the same ARL1 as the single-sampling np chart, the double-sampling chart requires a smaller average sample size. For a number of values of p0 (the in-control defective rate of the process) and p1 (the out-of-control defective rate of the process), the optimal chart designs were obtained, namely the designs that minimize ARL1 subject to maximum ASS and minimum ARL0 constraints. Optimal designs were obtained for several values of these constraints. The design consists of two sample sizes, for the first and second stages, and a set of limits for the chart. For each optimal design, the value of ARL1 was also computed for a range of p1 values besides the one for which the design was optimized. A performance comparison was carried out between the proposed scheme and the classical (single-sampling) np chart, the CuSum np scheme, the EWMA np control chart and the VSS np chart (the variable sample size control chart). For the comparison, optimal designs for each scheme were considered, under the same constraints and values of p0 and p1. An additional contribution of this thesis is the performance analysis and optimization of the np CuSum, EWMA and VSS schemes. The final result is the indication of the most efficient process control scheme for each situation. The double-sampling np control chart proposed and developed here proved to be in general the most efficient scheme for the detection of large and moderate increases in the process fraction defective, being surpassed only by the VSS chart in the cases in which p0, the (average) sample size and the increase in p0 (the p1/p0 ratio) are all small.
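A minimal sketch of how the operating characteristics of such a double-sampling np chart follow from binomial probabilities: a first sample of size n1 signals at once if its defective count d1 exceeds c1, is accepted if d1 <= w1, and otherwise triggers a second sample of size n2, with a signal when d1 + d2 > c2. The design parameters and fraction-defective values below are illustrative assumptions, not the optimal designs obtained in the thesis.

```python
from scipy.stats import binom

def ds_np_metrics(p, n1, n2, w1, c1, c2):
    """Signal probability, ARL and average sample size (ASS) per inspection
    of a double-sampling np chart with warning limit w1 and control limits
    c1 (stage 1) and c2 (combined count after stage 2)."""
    p_signal1 = binom.sf(c1, n1, p)                         # d1 > c1: immediate signal
    p_second = binom.cdf(c1, n1, p) - binom.cdf(w1, n1, p)  # w1 < d1 <= c1: sample again
    p_signal2 = sum(binom.pmf(d1, n1, p) * binom.sf(c2 - d1, n2, p)
                    for d1 in range(w1 + 1, c1 + 1))        # signal if d1 + d2 > c2
    p_signal = p_signal1 + p_signal2
    return p_signal, 1.0 / p_signal, n1 + n2 * p_second

for label, p in [("in control (p0 = 0.01)", 0.01),
                 ("out of control (p1 = 0.03)", 0.03)]:
    _, arl, ass = ds_np_metrics(p, n1=50, n2=100, w1=1, c1=4, c2=6)
    print(f"{label}: ARL = {arl:8.1f}, ASS = {ass:.1f}")
```

An optimal design in the thesis's sense would then search over (n1, n2, w1, c1, c2) to minimize the out-of-control ARL subject to the ASS and in-control ARL constraints.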

[en] XBAR CHART WITH ESTIMATED PARAMETERS: THE AVERAGE RUN LENGTH DISTRIBUTION AND CORRECTIONS TO THE CONTROL LIMITS / [pt] GRÁFICO XBARRA COM PARÂMETROS ESTIMADOS: A DISTRIBUIÇÃO DA TAXA DE ALARMES E CORREÇÕES NOS LIMITES

FELIPE SCHOEMER JARDIM 31 July 2018
[pt] Os gráficos de controle estão entre as ferramentas indispensáveis para monitorar o desempenho de um processo em várias indústrias. Quando estimativas de parâmetros são necessárias para projetar esses gráficos, seu desempenho é afetado devido aos erros de estimação. Para resolver esse problema, no passado, pesquisadores avaliavam o desempenho desses métodos em termos do valor esperado do número médio de amostras até um alarme falso condicionado às estimativas dos parâmetros (denotado por 𝐶𝐴𝑅𝐿0). No entanto, esta solução não considera a grande variabilidade do 𝐶𝐴𝑅𝐿0 entre usuários. Então, recentemente, surgiu a ideia de medir o desempenho dos gráficos de controle usando a probabilidade de o 𝐶𝐴𝑅𝐿0 ser maior do que um valor especificado – que deve estar próximo do valor nominal desejado. Isso é chamado de Exceedance Probability Criterion (EPC). Para aplicar o EPC, a função de distribuição acumulada (c.d.f.) do 𝐶𝐴𝑅𝐿0 é necessária. No entanto, para um dos gráficos de controle mais utilizados, o gráfico Xbarra, também conhecido como gráfico x (sob a suposição de distribuição normal), a expressão matemática da c.d.f. não está disponível na literatura. Como contribuição nesse sentido, o presente trabalho apresenta a derivação exata da expressão matemática da c.d.f. do 𝐶𝐴𝑅𝐿0 para três possíveis casos de estimação de parâmetros: (1) quando a média e o desvio-padrão são desconhecidos, (2) quando apenas a média é desconhecida e (3) quando apenas o desvio-padrão é desconhecido. Assim, foi possível calcular o número mínimo de amostras iniciais, m, que garante um desempenho desejado do gráfico em termos do EPC. Esses resultados mostram que m pode assumir valores consideravelmente grandes (como, por exemplo, 3.000 amostras). Como solução, duas novas equações são derivadas aqui para ajustar os limites de controle, garantindo assim um desempenho desejado para qualquer valor de m. A vantagem dessas equações é que uma delas fornece resultados exatos enquanto a outra dispensa avançados softwares de computador para os cálculos. Um estudo adicional sobre o impacto desses ajustes no desempenho fora de controle (OOC) fornece tabelas que ajudam na decisão do melhor tradeoff entre quantidade adequada de dados e desempenhos IC e OOC preferenciais do gráfico. Recomendações práticas para uso desses resultados são aqui também fornecidas. / [en] Control charts are among the indispensable tools for monitoring process performance in various industries. When parameter estimation is needed to design these charts, their performance is affected by parameter estimation errors. To overcome this problem, in the past, researchers evaluated the performance of control charts and designed them in terms of the expectation of the realized in-control (IC) average run length (𝐶𝐴𝑅𝐿0). But, as pointed out recently, this solution does not account for what is known as practitioner-to-practitioner variability (i.e., the variability of 𝐶𝐴𝑅𝐿0). So a recent idea emerged in which control chart performance is measured by the probability of the 𝐶𝐴𝑅𝐿0 being greater than a specified value, which must be close to the nominal desired one. This is called the Exceedance Probability Criterion (EPC). To apply the EPC, the cumulative distribution function (c.d.f.) of the 𝐶𝐴𝑅𝐿0 is required. However, for the most well-known control chart, namely the two-sided Shewhart Xbar (or simply X) chart (under the normality assumption), the mathematical c.d.f. expression of the 𝐶𝐴𝑅𝐿0 is not available in the literature. As a contribution in this respect, the present work presents the derivation of the exact c.d.f. expression of the 𝐶𝐴𝑅𝐿0 for three cases of parameter estimation: (1) when both the process mean and standard deviation are unknown, (2) when only the mean is unknown and (3) when only the standard deviation is unknown. Using these key results, it was possible to calculate the exact minimum number of initial (Phase I) samples (m) that guarantees a desired in-control performance in terms of the EPC. These results show that m can be prohibitively large (such as 3,000 samples). As a solution to this problem, two new equations are derived here to adjust the control limits so as to guarantee a desired in-control performance in terms of the EPC for any given value of m. The advantage of these equations (compared to the existing adjustment methods) is that one provides exact results and the other does not require many computational resources to perform the calculations. A further study of the impact of these adjustments on the out-of-control (OOC) performance provides useful tables for deciding on the appropriate amount of data and the adjustments that correspond to a user's preferred tradeoff between the IC and OOC performances of the chart. Practical recommendations for using these findings are also provided in this research work.
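The thesis derives the c.d.f. of 𝐶𝐴𝑅𝐿0 exactly; purely to illustrate the quantity involved, the sketch below approximates its distribution by Monte Carlo for case (1), both parameters unknown, assuming m = 25 Phase I subgroups of size n = 5 and the Sbar/c4 standard-deviation estimator (the estimator choice is an assumption of the sketch, not taken from the thesis).

```python
import numpy as np
from scipy.stats import norm
from scipy.special import gammaln

rng = np.random.default_rng(7)
m, n, L = 25, 5, 3.0                    # Phase I subgroups, subgroup size, limit width
c4 = np.exp(gammaln(n / 2) - gammaln((n - 1) / 2)) * np.sqrt(2 / (n - 1))
nominal_arl0 = 1.0 / (2 * norm.sf(L))   # about 370.4 when parameters are known

carl0 = np.empty(20_000)
for i in range(carl0.size):             # each draw is one "practitioner"
    phase1 = rng.normal(0.0, 1.0, (m, n))   # standardized in-control process
    mu_hat = phase1.mean()
    sigma_hat = phase1.std(axis=1, ddof=1).mean() / c4
    lcl = mu_hat - L * sigma_hat / np.sqrt(n)
    ucl = mu_hat + L * sigma_hat / np.sqrt(n)
    # Conditional false-alarm rate of a Phase II subgroup mean, which is
    # N(0, 1/n) under the true in-control distribution.
    alpha = norm.cdf(lcl * np.sqrt(n)) + norm.sf(ucl * np.sqrt(n))
    carl0[i] = 1.0 / alpha              # realized in-control ARL, CARL0

print(f"E[CARL0] ~ {carl0.mean():.0f} (nominal {nominal_arl0:.1f})")
print(f"P(CARL0 >= nominal) ~ {(carl0 >= nominal_arl0).mean():.2f}")
```

The fraction of practitioners whose realized in-control ARL falls short of the nominal value is what the exceedance probability criterion, and the adjusted limits derived in the thesis, are meant to control.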

Pesquisa-ação sobre os fatores de sucesso para implantação e continuidade do uso de cartas de controle estatístico de processo em uma planta química no Brasil

Santana Júnior, Manoel Bispo de 14 February 2014
Statistical process control charts (SPC charts) were created in the 20th century in the United States and have since been used for various purposes in different industries and services. SPC chart utilization became widespread in Japan in the postwar period, and its effective use in Brazil dates from the 1990s. Despite several examples of use, there are still industries where the deployment of SPC and its sustained use on the shop floor have not been successful. In the chemical industry, its use on the shop floor is controversial, and there are few studies focused on this industry that explain the factors affecting the successful implementation of statistical control charts by operational teams. This dissertation aims to investigate, through action research, the critical factors for a successful implementation and sustained use of SPC charts in a chemical plant. A literature review was performed to gather studies on success factors for implementing SPC charts in industry. Using action research, these critical success factors were tested in practice with operators in the chemical plant of a multinational enterprise located in Brazil, a plant that had tried to implement SPC charts in the past without success. As a result of the action research, we succeeded in implementing SPC charts in the industrial unit and in reducing variability, with energy savings of US$ 150,000 per year. SPC chart use was well received by operators and leadership, showing that some success factors are indeed critical for a correct SPC implementation in a Brazilian chemical plant. The results were used to develop a roadmap for SPC, with details about how and when to work on each success factor. The roadmap was afterwards used in eight other industrial units of the chemical company, all with success. / As cartas de controle estatístico de processo (cartas CEP) foram criadas no século 20 nos Estados Unidos e desde então vêm sendo usadas para diversas finalidades em diferentes setores da indústria e serviços. Seu uso foi difundido no Japão no pós-guerra e no Brasil seu uso efetivo ocorreu a partir da década de 90. Apesar de vários exemplos de uso, ainda há para alguns ramos da indústria o desafio de implantar as cartas CEP no chão de fábrica e manter o seu uso pelas equipes. Nas indústrias químicas, seu uso no chamado chão de fábrica é controverso e são poucos os estudos voltados para esse tipo de indústria que expliquem os fatores que impactam no sucesso da implantação das cartas de controle estatístico. Essa dissertação tem como objetivo investigar, através de uma pesquisa-ação, quais fatores são preponderantes no sucesso da implantação e uso continuado das cartas CEP em uma planta química no Brasil. Foi realizada pesquisa bibliográfica sobre os estudos já realizados em diferentes tipos de indústria e coletados os fatores de sucesso identificados. Em seguida, esses fatores que impactam no sucesso da implantação da carta CEP foram testados na prática em uma planta química de uma empresa multinacional instalada no Brasil há muitos anos e que no passado não teve êxito na adoção das cartas CEP. Como resultado, conseguiu-se implantar as cartas de controle estatístico e se garantir a continuidade da sua utilização pelas equipes operacionais. Na planta química pesquisada, reduziu-se a variabilidade da umidade de um produto na seção de filtração, resultando em um ganho de 150 mil dólares por ano em consumo de energia para a empresa. O uso da carta CEP foi bem recebido pelos operadores e lideranças. Os resultados obtidos serviram para elaborar um roteiro de implantação das cartas CEP, que detalha para cada fator de sucesso quais ações práticas devem ser feitas e em qual sequência. O roteiro foi posteriormente utilizado em mais oito pilotos de diferentes unidades industriais da própria empresa pesquisada, todos com sucesso.

Detecção de outlier como suporte para o controle estatístico do processo multivariado: um estudo de caso em uma empresa do setor plástico.

Almeida Júnior, José de 29 August 2013
The research project aimed to apply a forward search algorithm to aid decision making in multivariate statistical process control in the manufacture of bottle crates at a plastic products company. In addition, principal component analysis (PCA) and the Hotelling T² chart were used to summarize the relevant information of this process. Two results of considerable importance were thus produced: the scores of the principal components and an adapted Hotelling T² chart, highlighting the relationship between the ten variables analyzed. The forward search algorithm detects points that disagree with the rest of the data cluster; when they lie too far away or have very different characteristics, they are called outliers. The BACON algorithm was used for the detection of such occurrences. It starts from a small subset of the original data, demonstrably free of outliers, and keeps adding new observations that are also not outliers to this initial subset, until no further observations can be absorbed. One of the advantages of this algorithm is that it counters the masking and swamping phenomena, which distort the mean and covariance estimates. The results of the research showed that, for the dataset studied, the BACON algorithm detected no discordant point. A simulation was then developed, drawing uniform random numbers within a given interval to modify the mean and standard deviation values, in order to show that the method is effective in detecting such outliers. For this simulation, the mean and standard deviation values of 5% of the original data were randomly altered. The result of this simulation showed that the BACON algorithm is perfectly applicable to the case studied, and its use is indicated in other production processes that depend simultaneously on several variables. / O projeto de pesquisa estudado teve o objetivo de aplicar um algoritmo de busca sucessiva para o auxílio à tomada de decisão no controle estatístico do processo multivariado, na fabricação de garrafeiras em uma empresa de produtos plásticos. Além disso, a utilização das técnicas de análise de componentes principais (ACP) e da carta T² de Hotelling pode sumarizar parte das informações relevantes desse processo. Produziram-se então dois resultados de considerável importância: os escores dos componentes principais e um gráfico T² de Hotelling adaptado, evidenciando a relação entre as dez variáveis analisadas. O algoritmo de busca sucessiva detecta pontos discordantes do restante do agrupamento de dados que, quando se encontram muito distantes ou têm características muito diferentes, são denominados outliers. O algoritmo BACON foi utilizado para a detecção de tais ocorrências, o qual parte de um pequeno subconjunto, comprovadamente livre de outliers, dos dados originais e vai adicionando novas informações, que também não são outliers, a esse subconjunto inicial até que nenhuma informação possa mais ser absorvida. Uma das vantagens da utilização desse algoritmo é que ele combate os fenômenos do mascaramento e do esmagamento que alteram as estimativas da média e da covariância. Os resultados da pesquisa mostraram que, para o conjunto de dados estudado, o algoritmo BACON não detectou nenhum ponto discordante. Uma simulação foi então desenvolvida, utilizando uma distribuição uniforme através da obtenção de números aleatórios dentro de um intervalo para a modificação dos valores da média e do desvio-padrão, a fim de mostrar que tal método é eficaz na detecção desses pontos aberrantes. Para essa simulação, foram alterados aleatoriamente os valores da média e do desvio-padrão de 5% dos dados originais. O resultado dessa simulação mostrou que o algoritmo BACON é perfeitamente aplicável ao caso estudado, sendo indicada a sua utilização em outros processos produtivos que dependam simultaneamente de diversas variáveis.
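A minimal sketch of the BACON forward search (Billor, Hadi and Velleman, 2000) on which the study relies: start from a small subset taken to be outlier-free, then repeatedly re-estimate the mean and covariance from the current subset and readmit every observation whose Mahalanobis distance falls under a chi-square cutoff, stopping when the subset no longer changes. The initial-subset size, the contamination used in the demonstration and the omission of the algorithm's small-sample correction factor on the cutoff are simplifying assumptions of this sketch.

```python
import numpy as np
from scipy.stats import chi2

def bacon_outliers(X, alpha=0.05, c=4, max_iter=100):
    """Flag outliers with a simplified multivariate BACON forward search."""
    n, p = X.shape
    d0 = np.linalg.norm(X - np.median(X, axis=0), axis=1)
    subset = np.sort(np.argsort(d0)[: c * p])      # c*p points nearest the medians
    cutoff = np.sqrt(chi2.ppf(1 - alpha / n, p))   # distance bound for admission
    for _ in range(max_iter):
        mean = X[subset].mean(axis=0)
        inv_cov = np.linalg.inv(np.cov(X[subset], rowvar=False))
        diff = X - mean
        d = np.sqrt(np.einsum("ij,jk,ik->i", diff, inv_cov, diff))  # Mahalanobis
        new_subset = np.flatnonzero(d < cutoff)    # grow the basic subset
        if np.array_equal(new_subset, subset):
            break                                  # nothing left to absorb
        subset = new_subset
    return np.setdiff1d(np.arange(n), subset)      # never admitted -> outliers

# Demonstration loosely mirroring the thesis's simulation idea: shift 5% of
# an otherwise clean multivariate sample away from the bulk of the data.
rng = np.random.default_rng(3)
X = rng.multivariate_normal(np.zeros(3), np.eye(3), 200)
X[:10] += 6.0
print("rows flagged as outliers:", bacon_outliers(X))
```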

O domínio da leitura e da escrita: por que não eu?

Moreira, José Leonides 10 December 2008
The study and research field of Education is wide and rich, mainly when it moves into the empirical field of social reality. This research focuses on young and adult subjects who cannot read or write, although they have had access to and attended school in Natal/RN. The locus of the research is the municipal schools that run the Youth and Adult Education programme (EJA), with representatives from the North, South, East and West zones of the city, in a total of six municipal schools. It analyzes these subjects' replies to the questions "Why are there young people and adults who attended school but still cannot read or write?" and "What situations of exclusion do they face by not being able to read or write?". From a dialectical view of the subject, the research strategy for data collection is the semi-structured interview; the replies given by the interviewees are divided into analysis categories presented in a map of ideas. The results of the research are analyzed and lead to the conclusion that affective, organic, cognitive, social, political and pedagogical factors are mentioned by the subjects as reasons why they have not mastered reading and writing. The young people and adults interviewed are not content with their school failure; learning to read and write is something that eases their inclusion in a society that privileges such abilities, and with it they could minimize the situations of social exclusion they have faced at school, in the workplace, at home, in church, at health centres, on the street, at their children's school and in public assistance institutions. / O campo de pesquisa e estudos da educação é vasto e rico, principalmente quando se adentra no campo empírico da realidade social. Nessa investigação, o corpus de atenção são os sujeitos jovens e adultos que não dominam a leitura e a escrita, embora tenham tido acesso e freqüentado o espaço escolar no município de Natal/RN. O lócus da pesquisa são as Escolas da Rede Municipal de Ensino que desenvolvem a Educação de Jovens e Adultos (EJA), representantes das zonas Norte, Sul, Leste e Oeste da cidade, perfazendo um total de seis escolas da Rede Municipal de Ensino, pertencentes a diferentes regiões da capital potiguar. Analisam-se as vozes dos sujeitos ao responderem às questões: Por que existem jovens e adultos escolarizados sem o domínio da leitura e da escrita? Quais as situações de exclusão enfrentadas pelo não domínio da leitura e da escrita entre os jovens e adultos pesquisados? A partir de uma visão dialética do assunto, a estratégia de pesquisa para a coleta de dados é a entrevista semi-estruturada, para apreender a voz dos sujeitos da pesquisa; as respostas são divididas em categorias de análise apresentadas em um mapa das idéias. Analisam-se os resultados da investigação, concluindo-se que fatores afetivos, orgânicos, cognitivos, sociais, políticos e pedagógicos são citados pelos sujeitos como motivos para o não domínio da leitura e da escrita. Os jovens e adultos entrevistados não se sentem satisfeitos com o fracasso escolar; a aprendizagem da leitura e da escrita é um domínio que facilita a inclusão social em uma sociedade que privilegia tais habilidades, podendo minimizar as situações de exclusão social que os mesmos enfrentaram no espaço escolar, no ambiente de trabalho e de emprego, em casa, na igreja, nos postos de atendimento à saúde, na rua, na escola dos filhos e em instituições de atendimento público, por causa da não aprendizagem.
