  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
141

Classification of RNA Pseudoknots and Comparison of Structure Prediction Methods / Classification de Pseudo-nœuds d'ARN et Comparaison de Méthodes de Prédiction de Structure

Zeng, Cong 03 July 2015 (has links)
Many studies highlight the importance of RNA molecules, which play vital roles in numerous molecular processes, and it is widely accepted that the structures of RNA molecules hold the key to discovering their functions. In investigating RNA structures, researchers rely increasingly on bioinformatic methods. Many in silico methods for predicting RNA secondary structures have emerged, including some capable of predicting pseudoknots, a particular type of RNA secondary structure. The purpose of this dissertation is to compare the state-of-the-art pseudoknot prediction methods and to offer colleagues insight into choosing a practical method for a given single sequence. Considerable effort has gone into the prediction of RNA secondary structures, including pseudoknots, over the last decades, yielding many programs in this field. Several questions arise as a consequence. How does each method perform, especially on a particular class of RNA sequences? What are their advantages and disadvantages? What can be learned from contemporary methods when developing new ones? This dissertation investigates these questions. It carries out extensive comparisons of the performance of the available methods for predicting RNA pseudoknots. One main part focuses on the prediction of frameshifting signals, principally by two methods. The second main part focuses on the prediction of pseudoknots involved in much more general molecular activities. In detail, the second part covers 414 pseudoknots, drawn from both Pseudobase and the Protein Data Bank, and 15 methods, comprising 3 exact methods and 12 heuristic ones. Three main categories of complexity measurements are introduced, each of which further divides the 414 pseudoknots into a series of subclasses. The comparisons evaluate the predictions of each method on the entire set of 414 pseudoknots and on subsets classified by the complexity measurements as well as by the length, RNA type and organism of the pseudoknots. The results show that pseudoknots found in nature have relatively low complexity under all measurements. The performance of contemporary methods varies from subclass to subclass but decreases consistently as the complexity of the pseudoknots increases. More generally, the heuristic methods globally outperform the exact ones. The assessment results depend strongly on the quality of the reference structures and on the evaluation system. Last but not least, this part of the work is provided as an on-line benchmark for the bioinformatics community.
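The assessment described above scores each method's predicted base pairs against a reference structure. The abstract does not reproduce its scoring formulas; the following is a minimal Python sketch of the commonly used sensitivity/PPV scoring over base pairs, with the function name and the toy structures invented for illustration.

```python
# Hypothetical sketch: scoring one predicted pseudoknot structure against a
# reference structure using sensitivity and PPV over base pairs. Structures
# are represented as sets of (i, j) index pairs with i < j.

def base_pair_scores(reference: set[tuple[int, int]],
                     predicted: set[tuple[int, int]]) -> tuple[float, float]:
    """Return (sensitivity, positive predictive value) for a prediction."""
    true_positives = len(reference & predicted)
    sensitivity = true_positives / len(reference) if reference else 0.0
    ppv = true_positives / len(predicted) if predicted else 0.0
    return sensitivity, ppv

# Toy example: the prediction recovers two of the three reference pairs and
# adds one spurious pair, giving sensitivity = PPV = 2/3.
reference = {(1, 20), (2, 19), (5, 30)}   # (5, 30) crosses the stem -> pseudoknot
predicted = {(1, 20), (2, 19), (6, 28)}
print(base_pair_scores(reference, predicted))
```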
142

Avaliação do Star Schema Benchmark aplicado a bancos de dados NoSQL distribuídos e orientados a colunas / Evaluation of the Star Schema Benchmark applied to NoSQL column-oriented distributed databases systems

Lucas de Carvalho Scabora 06 May 2016 (has links)
With the growth in the volume of data handled by data warehousing applications, centralized solutions become very costly and face difficulties in dealing with data scalability. These applications need both to store huge volumes of data and to perform analytical (i.e., OLAP) queries against these voluminous data efficiently. One solution is to employ scenarios characterized by the use of NoSQL databases managed in parallel and distributed environments. Among the challenges related to these scenarios is the need to investigate the performance of data warehousing applications that store the data warehouse (DW) in column-oriented NoSQL databases. In this context, benchmarks are widely used to perform standardized, experimental analyses of distinct systems. However, most benchmarks for DWs focus on relational database systems and centralized environments. In this master's research, we investigate how to extend the Star Schema Benchmark (SSB), which was proposed for centralized DWs, to the distributed, column-oriented NoSQL database HBase. We introduce proposals and analyses based mainly on experimental performance tests considering each of the four steps of a benchmark, i.e., schema and workload, data generation, parameters and metrics, and validation. The main results are as follows: (i) a proposal of the FactDate schema, which optimizes queries that access few dimensions of the DW; (ii) an investigation of the applicability of different schemas to different business scenarios; (iii) a proposal of two additional queries for the SSB workload; (iv) an analysis of the distribution of the data generated by the SSB, verifying whether the data aggregated by OLAP queries are balanced across the nodes of a cluster; (v) an investigation of the influence of three important parameters of the Hadoop MapReduce framework on OLAP query processing; (vi) an evaluation of the relationship between OLAP query performance and the number of nodes in a cluster; and (vii) the use of hierarchical materialized views, through the Spark framework, to optimize the processing of consecutive OLAP queries that require data at progressively more or less detailed levels. These results represent important findings that enable the future proposal of a benchmark for DWs stored in NoSQL databases and managed in parallel and distributed environments.
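As an illustration of item (vii), the following is a hedged PySpark sketch (not the thesis code) of a hierarchical materialized view: a month-level aggregate is cached once and then reused to answer a coarser, year-level OLAP query without re-scanning the fact table. The SSB-style table and column names (lineorder, lo_revenue, d_year, ...) and the HDFS paths are assumptions for illustration.

```python
# Minimal PySpark sketch: build a coarse materialized view once, then answer
# progressively less detailed OLAP queries from it.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("ssb-hierarchical-views").getOrCreate()

lineorder = spark.read.parquet("hdfs:///ssb/lineorder")   # fact table (assumed path)
dates = spark.read.parquet("hdfs:///ssb/date")            # date dimension (assumed path)

# Materialized view at (year, month) granularity, cached for reuse.
revenue_by_month = (
    lineorder.join(dates, lineorder.lo_orderdate == dates.d_datekey)
             .groupBy("d_year", "d_yearmonthnum")
             .agg(F.sum("lo_revenue").alias("revenue"))
             .cache()
)

# A consecutive, coarser query (roll-up to year) is answered from the cached
# view instead of re-scanning the fact table.
revenue_by_year = (
    revenue_by_month.groupBy("d_year")
                    .agg(F.sum("revenue").alias("revenue"))
)
revenue_by_year.show()
```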
143

The index reconstruction effect : An event study on the OMX Stockholm Benchmark Index

Askeljung, Love January 2021 (has links)
Background. Owing to ongoing technological development, telecommunications and computers have become very advanced. This has had a tremendous effect on the financial markets as well: various facilitating financial instruments have become much more common. One of these is the passively managed index fund, which does not merely use an index as a benchmark but also trades the stocks in the index, thereby guaranteeing the fund a return equal to the market return at a lower cost than an equally good actively managed fund. Index funds have increased in popularity in recent times, and this has left its mark: the prices of stocks included in an index have been observed to increase, and those of excluded stocks to decrease. Research on the price effects caused by index revision, that is, the effects that inclusion and exclusion have on the price of the underlying shares, has been conducted since the 1980s. In the literature it is generally accepted that inclusion in an index results in a positive price development, while exclusion results in a negative one. However, the literature does not agree on whether the price effects are long-term or short-term. The disagreement began with the first studies in the field, where one author found that the price effects were long-lasting, even permanent, while others found that they were short-lived and that prices returned to their original values when the trading ceased. Subsequent research is equally inconsistent: some studies have found temporary changes and some permanent ones. Different explanatory theories have been put forward for the different outcomes, but these do not agree either. Objectives. To bring some clarity to these problems, the purpose of this study is to investigate the stock price effects of the reconstruction of a Swedish market index, with particular attention to whether the effects are temporary or permanent. Methods. This study applied the event study methodology and the market model to examine the abnormal returns found around the announcement day and the changing day. The study is based on 195 stocks that were included in or excluded from the OMX Stockholm Benchmark Index between 2009 and 2019. Results. The study did not find any statistically significant price change in the period before the announcement date. However, there were indications that the announcement day had a positive effect on the included stocks and a negative effect on the excluded stocks. The period after the announcement day and prior to the changing day did not show any statistically significant price changes. The changing day and the period after it were both found to be significantly negative for inclusions, indicating a negative price effect on the day of inclusion and in the period that followed. These results are consistent with previous studies that have found a price drop on the changing day and in the following period. A further test of the relationship between the abnormal returns found on the announcement day and the changing day revealed that the price increase was concentrated on the announcement day. A possible explanation is that index funds trading on the Swedish exchange have recognized the opportunity to trade closer to the announcement day without incurring losses and have acted accordingly. Regarding the exclusions, the changing day was not found to be statistically significant, nor did the period following the changing day show any statistical significance. This could be due to a delayed reaction to the changing day, given that this group showed a slow reaction to the announcement day as well; both the announcement day and the day after it were statistically significant at the 1% level. Other possible causes for the deviating results are errors in execution or in the data. Conclusions. The result of this study is consistent with other studies that find a temporary price reaction to index reconstruction.
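For readers unfamiliar with the market-model event study used here, the sketch below shows the basic computation: estimate alpha and beta over an estimation window, then cumulate the abnormal returns over the event window. The window lengths and the simulated return series are invented for illustration and are not the thesis data.

```python
# Illustrative market-model event-study computation:
# estimation window -> alpha/beta -> abnormal returns -> CAR.
import numpy as np

def market_model_car(stock_ret: np.ndarray, market_ret: np.ndarray,
                     estimation: slice, event: slice) -> float:
    """Cumulative abnormal return over `event`, with the market model
    R_i = alpha + beta * R_m estimated by OLS over `estimation`."""
    beta, alpha = np.polyfit(market_ret[estimation], stock_ret[estimation], 1)
    expected = alpha + beta * market_ret[event]
    abnormal = stock_ret[event] - expected
    return float(abnormal.sum())

rng = np.random.default_rng(0)
market = rng.normal(0.0004, 0.01, 300)                     # simulated daily returns
stock = 0.0002 + 1.1 * market + rng.normal(0, 0.01, 300)   # a stock with beta ~ 1.1
car = market_model_car(stock, market,
                       estimation=slice(0, 250),   # e.g. 250 trading days
                       event=slice(250, 261))      # e.g. an 11-day event window
print(f"CAR over event window: {car:.4f}")
```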
144

Seminar Hochleistungsrechnen und Benchmarking: x264-Encoder als Benchmark / High-Performance Computing and Benchmarking Seminar: the x264 Encoder as a Benchmark

Naumann, Stefan January 2014 (has links)
Modern video encoding requires a large amount of computation. Among other steps, the picture is partitioned into macroblocks, motion vectors are computed, and motion predictions are made in order to save storage space in the compressed file. The x264 encoder attempts this in several different ways, which makes the actual encoding process slow; on older or slower PCs it takes considerably longer than other methods. In addition, the x264 encoder uses standards such as SSE, AVX and OpenCL to save time by processing several data elements at once. x264 is therefore also well suited for evaluating such standards and for examining the speed-up gained from vector operations or GPU acceleration.
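A minimal sketch of how such a measurement could be set up is shown below: the same encode is timed with and without x264's hand-written SIMD routines. It assumes an x264 binary on the PATH that supports the --no-asm option and an input file input.y4m; both are assumptions, and this is not the seminar's actual benchmark harness.

```python
# Rough benchmarking sketch: time an x264 encode with and without its
# CPU-optimized assembly kernels (SSE/AVX), using the command-line encoder.
import subprocess
import time

def time_encode(extra_args: list[str]) -> float:
    start = time.perf_counter()
    subprocess.run(
        ["x264", *extra_args, "--preset", "medium", "-o", "out.264", "input.y4m"],
        check=True,
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return time.perf_counter() - start

with_simd = time_encode([])                # SIMD kernels enabled (default)
without_simd = time_encode(["--no-asm"])   # plain C fallback
print(f"SIMD speed-up: {without_simd / with_simd:.2f}x")
```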
145

Benchmarking och Utvärdering av Protokollet QUIC : En jämförelse av QUIC gentemot TCP / Benchmarking and Evaluation of the QUIC Protocol: A Comparison of QUIC and TCP

Ekberg, Adam, Tedengren, Ivan January 2017 (has links)
Since 2012, Google has been developing a new transport protocol called QUIC (Quick UDP Internet Connections). The purpose of the QUIC protocol is to speed up the web and, above all, to reduce response times for websites. This is interesting from several perspectives. It benefits the ordinary user browsing the web, but it also matters economically: studies show that quicker website response times attract customers both short term and long term, which is important in areas such as e-commerce. On top of this, the Internet alone (home computers, data centers, etc.) accounts for about 10% of the world's electricity consumption, and a quicker and more effective transport protocol could help to lower this number, since a great deal of data is transferred over the Internet each day. QUIC is already in use on many of Google's servers and is used when browsing the web in the Chrome or Opera browser, which means that many people have already encountered QUIC without knowing it. This degree project focuses on the main problems that motivate the QUIC protocol and compares QUIC to TCP, which has been the dominant transport protocol for reliable data transmission for decades and still is. In this project, a test environment is implemented that makes it possible to compare response times for websites. Two different tests are made in which common Internet conditions are simulated to see how these conditions affect the response time for each protocol. The tests have shown that QUIC and TCP are roughly equal with respect to response time when the delay is 100 ms or less and there is no packet loss. When the delay exceeds 100 ms, our tests show that QUIC delivers quicker response times. The tests have also shown that QUIC is superior to TCP when data is transferred over a connection with packet losses, although it can be questioned whether our TCP server could have been better optimized to compete with QUIC.
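A minimal sketch of such a test environment, under the assumption that Linux traffic shaping (tc/netem) is used to inject delay and packet loss, might look as follows. The interface name, URL and parameter values are invented; the client shown uses plain HTTP over TCP, and the QUIC side of the comparison would require a QUIC-capable client such as Chrome.

```python
# Sketch: shape the link with netem, then measure mean response times against
# a test server. Requires root privileges and the iproute2 `tc` tool.
import subprocess
import time
import urllib.request

DEV = "eth0"                              # assumed network interface
URL = "http://192.168.1.10/index.html"    # assumed test server

def set_netem(delay_ms: int, loss_pct: float) -> None:
    subprocess.run(["tc", "qdisc", "replace", "dev", DEV, "root", "netem",
                    "delay", f"{delay_ms}ms", "loss", f"{loss_pct}%"], check=True)

def mean_response_time(url: str, runs: int = 10) -> float:
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        urllib.request.urlopen(url).read()   # HTTP over TCP
        samples.append(time.perf_counter() - start)
    return sum(samples) / len(samples)

for delay, loss in [(50, 0.0), (100, 0.0), (200, 1.0)]:
    set_netem(delay, loss)
    print(f"delay={delay}ms loss={loss}% -> mean response {mean_response_time(URL):.3f}s")

subprocess.run(["tc", "qdisc", "del", "dev", DEV, "root"], check=True)  # clean up
```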
146

Java jämfört med C#, vilken sorterar snabbast på Raspberry Pi? / Java compared to C#, which sorts fastest on Raspberry Pi?

Olofsson, Christoffer January 2015 (has links)
In this study, Java and C# are pitted against each other on a Raspberry Pi to see which of them sorts integer vectors fastest, and whether there is a clear difference between the two languages. HotSpot is used as the Java engine and Mono for C#, and the programs sort vectors using sorting algorithms from each language's standard library as well as an implemented algorithm based on selection sort. This work is aimed at those who want to work with an object-oriented language on the Raspberry Pi but have not yet decided which one to use. The results show that Java performs better than C# in most cases, with some exceptions where C# performs better.
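A hedged sketch of a harness for this kind of comparison is shown below: the same sorting program is launched under HotSpot and under Mono, and the wall-clock time is recorded. The program names and the vector-length argument are invented, and process start-up cost is included in the timing, so a fair comparison would time the sort inside the programs themselves, as the study presumably does.

```python
# Hypothetical harness: run the same sorting benchmark under the HotSpot JVM
# and under Mono, keeping the best of several wall-clock timings.
import subprocess
import time

def time_command(cmd: list[str], runs: int = 5) -> float:
    best = float("inf")
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, check=True, stdout=subprocess.DEVNULL)
        best = min(best, time.perf_counter() - start)
    return best

n = "1000000"  # number of integers to sort (invented argument)
print("Java/HotSpot:", time_command(["java", "SortBenchmark", n]))
print("C#/Mono:     ", time_command(["mono", "SortBenchmark.exe", n]))
```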
147

Bayesian and Frequentist Approaches for the Analysis of Multiple Endpoints Data Resulting from Exposure to Multiple Health Stressors.

Nyirabahizi, Epiphanie 08 March 2010 (has links)
In risk analysis, benchmark dose (BMD) methodology is used to quantify the risk associated with exposure to stressors such as environmental chemicals. It consists of fitting a mathematical model to the exposure data; the BMD is the dose expected to result in a pre-specified response, the benchmark response (BMR). Most available exposure data come from single-chemical exposures, but living organisms are exposed to multiple sources of hazards. Furthermore, in some studies researchers may observe multiple endpoints on one subject. Statistical approaches to the multiple-endpoints problem can be partitioned into a dimension-reduction group and a dimension-preserving group. Composite scores based on a desirability function are used, as a dimension-reduction method, to evaluate the neurotoxic effects of a mixture of five organophosphate pesticides (OP) along a fixed mixing-ratio ray, with five endpoints observed. Then a Bayesian hierarchical model approach is introduced as a single unifying dimension-preserving method to evaluate the risk associated with exposure to chemical mixtures. At a pre-specified vector of BMRs of interest, the method estimates a tolerable region referred to as the benchmark dose tolerable area (BMDTA) in multidimensional Euclidean space. The endpoints defining the BMDTA are determined, and model uncertainty and model selection problems are addressed using the Bayesian model averaging (BMA) method.
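As a concrete illustration of the BMD idea (not the models used in the dissertation), the sketch below fits a simple one-stage quantal dose-response model to invented data and solves for the dose at which the extra risk equals the BMR.

```python
# Minimal BMD sketch: fit P(d) = 1 - exp(-(q0 + q1*d)) and solve for the dose
# where the extra risk ER(d) = (P(d) - P(0)) / (1 - P(0)) = 1 - exp(-q1*d)
# equals the benchmark response.
import numpy as np
from scipy.optimize import curve_fit

def one_stage(d, q0, q1):
    return 1.0 - np.exp(-(q0 + q1 * d))

doses = np.array([0.0, 5.0, 10.0, 20.0, 40.0])        # invented dose groups
response = np.array([0.02, 0.08, 0.15, 0.30, 0.52])   # invented response fractions

(q0, q1), _ = curve_fit(one_stage, doses, response, p0=[0.01, 0.01],
                        bounds=(0.0, np.inf))

bmr = 0.10  # 10% extra risk
bmd = -np.log(1.0 - bmr) / q1
print(f"q0={q0:.4f}, q1={q1:.4f}, BMD at {bmr:.0%} extra risk: {bmd:.2f}")
```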
148

Transport av styckegods på järnväg: en utredande studie / Transportation of break-bulk cargo by rail: an investigative study

Häggblom, Linnea, Norman, Mikael January 2017 (has links)
In today's society, large quantities of goods are shipped both domestically within Sweden and across borders. The increasing flow of goods places higher demands on the capacity of the infrastructure while also requiring thought and action for a sustainable environment. The overall aim of the study is to highlight development areas for cargo freight by railway. The more specific goals are to identify which enablers are needed when establishing an intermodal terminal that handles break-bulk cargo directly from the railway, and what the barriers might be. To achieve this aim, a qualitative study was conducted with interviews as primary data. The results were produced by studying the market prerequisites and by carrying out a competitive-intelligence analysis and a benchmark. In the results, the interviews and the examined documents have been compiled, and these data form the basis for the analysis and conclusions. Much of the collected data indicated that break-bulk cargo handling at intermodal terminals is currently not offered and is not considered economically viable. However, the study also revealed that break-bulk cargo handling is a service desired by the business sector, as rail is regarded as an environmentally friendly mode of transportation, something the business sector appreciates. To cope with this kind of cargo handling and transportation, more research is needed, as well as better cooperation between the private and public sectors and changes to the infrastructure.
149

Uma formulação por média-variância multi-período para o erro de rastreamento em carteiras de investimento. / A multi-period mean-variance formulation of tracking error for portfolio selection.

Zabala, Yeison Andres 24 February 2016 (has links)
In this work, an optimal policy for portfolio selection based on mean-variance analysis of the multi-period tracking error (ERM) is derived, where the ERM is understood as the difference between the capital accumulated by the selected portfolio and that accumulated by the benchmark portfolio. The methodology of Li and Ng [24] was applied to obtain the analytical solution, generalizing the single-period case introduced by Roll [38]. A portfolio was then selected from the Brazilian stock market based on the correlation factor; the São Paulo stock exchange index IBOVESPA was adopted as the benchmark, and the basic interest rate SELIC as the fixed-income asset. Two cases were considered: a portfolio composed of risky assets only (case I), and a portfolio with a risk-free asset, indexed to SELIC, together with the assets of case I (case II).
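For illustration only, the sketch below computes the simpler ex-post, per-period version of the tracking error from invented return data; the dissertation's ERM is instead defined on accumulated capital over multiple periods and optimized analytically.

```python
# Illustrative sketch: ex-post tracking error of a portfolio against a
# benchmark, measured from per-period return series.
import numpy as np

rng = np.random.default_rng(1)
n_periods, n_assets = 250, 5
asset_returns = rng.normal(0.0005, 0.02, (n_periods, n_assets))   # invented data
benchmark_returns = asset_returns.mean(axis=1)                    # toy benchmark

weights = np.array([0.30, 0.25, 0.20, 0.15, 0.10])   # chosen portfolio weights
portfolio_returns = asset_returns @ weights

active = portfolio_returns - benchmark_returns       # per-period tracking difference
tracking_error = active.std(ddof=1)                  # ex-post tracking error
print(f"mean active return: {active.mean():.5f}, tracking error: {tracking_error:.5f}")
```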
150

Os programas de melhoria realmente importam?: uma avaliação em uma empresa de manufatura / Do improvement programs really matter? An evaluation in a manufacturing company

Souza, Iberê Guarani de 31 January 2014 (has links)
Every day, companies seek to improve their productive efficiency and thereby increase their profitability and competitiveness. To that end, there are several ways to uncover the critical competitiveness factors that may be present in the various manufacturing sectors, and the use of robust techniques to assess and measure these factors is essential to support decision making. This study analyzes the influence of continuous improvement and learning processes on efficiency and production volume in a manufacturing company. To achieve this objective, the research conducts a case study using Data Envelopment Analysis (DEA) combined with linear regression and ANOVA tests. A conceptual model with four main hypotheses and eight secondary ones is formulated.
To evaluate DEA efficiency, the model uses variable returns to scale (VRS) with input orientation, considering the main raw materials used by the company on the basis of total variable cost. The linear regression evaluates the impact of the improvement and learning processes on DEA efficiency, while the ANOVA test compares the mean efficiency of each production line for each year analyzed. The study is longitudinal, covering six years of manufacturing data. The results show that only one of the production lines increased its efficiency over time, and indicate that improvement actions affected the production volume of two production lines. The variables related to Kaizen programs, hours of training and employees' experience significantly influenced the model. It can be concluded that the projects focused on continuous improvement and learning were not sufficient to increase efficiency in two major production lines. Furthermore, the study shows that production volume negatively impacts the efficiency of one of the production lines. The analysis makes it possible to identify which factors are relevant for increasing productive efficiency; it can therefore be concluded that technological upgrading is an important factor for the company studied to pursue.
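The sketch below illustrates the input-oriented, variable-returns-to-scale DEA model (the BCC formulation) as a linear program in SciPy; the input/output data are invented, and this is not the study's actual dataset or code.

```python
# Input-oriented BCC DEA: for each DMU o, minimize theta subject to a convex
# combination of peers using at most theta * inputs_o and producing at least
# outputs_o. Each row below is a DMU (e.g. a production line in a given year).
import numpy as np
from scipy.optimize import linprog

inputs = np.array([[20.0, 300.0],    # e.g. raw material cost, machine hours (invented)
                   [25.0, 280.0],
                   [18.0, 350.0],
                   [30.0, 400.0]])
outputs = np.array([[500.0],          # e.g. production volume (invented)
                    [480.0],
                    [520.0],
                    [610.0]])
n_dmus = inputs.shape[0]

def bcc_input_efficiency(o: int) -> float:
    """Efficiency score (theta) of DMU `o` under variable returns to scale."""
    c = np.r_[1.0, np.zeros(n_dmus)]                        # minimize theta
    A_in = np.c_[-inputs[o], inputs.T]                      # sum(l*x) - theta*x_o <= 0
    A_out = np.c_[np.zeros(outputs.shape[1]), -outputs.T]   # -sum(l*y) <= -y_o
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(inputs.shape[1]), -outputs[o]]
    A_eq = np.r_[0.0, np.ones(n_dmus)].reshape(1, -1)       # sum(lambda) = 1 (VRS)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(None, None)] + [(0, None)] * n_dmus)
    return res.fun

for o in range(n_dmus):
    print(f"DMU {o}: efficiency = {bcc_input_efficiency(o):.3f}")
```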
