  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
The index reconstruction effect: An event study on the OMX Stockholm Benchmark Index

Askeljung, Love January 2021
Background. Owing to continuing technological development, telecommunications and computing have become highly advanced. This has had a tremendous effect on the financial markets as well: various facilitating financial instruments have become much more common. One of these is the passively managed index fund, which not only uses an index as a benchmark but also trades the stocks in the index, guaranteeing the fund a return equal to the market return at a lower cost than an equally good actively managed fund. Index funds have grown in popularity in recent times, and this has left its mark: the prices of stocks included in an index have been observed to increase, and the prices of stocks excluded from it to decrease.

Research on the price effects caused by index revision, that is, the effects that inclusion and exclusion have on the price of the underlying shares, has been around since the 1980s. It is generally accepted in the literature that inclusion in an index results in a positive price development, while exclusion results in a negative one. However, the literature does not agree on whether these price effects are long-term or short-term. The disagreement began with the first studies in the field: one author found that the price effects were long-lasting, even permanent, while the others found that they were short-lived and that prices returned to their original values when the trading ceased. Subsequent research is equally inconsistent; some studies have found temporary changes and some permanent ones. Different explanatory theories have been presented for the different outcomes, but these do not agree either.

Objectives. To bring some clarity to these problems, the purpose of this study is to investigate the stock price effects of the reconstruction of a Swedish market index, with consideration of whether the effects are temporary or permanent.

Methods. This study applied the event study methodology and the market model to examine the abnormal return around the announcement day and the changing day. It is based on 195 stocks that were included in or excluded from the OMX Stockholm Benchmark Index between 2009 and 2019.

Results. The study found no statistically significant price change in the period before the announcement date. There were, however, indications that the announcement day had a positive effect on the included stocks and a negative effect on the excluded stocks, while the period between the announcement day and the changing day showed no statistically significant price changes. The changing day and the period after it were both significantly negative for inclusions, indicating a negative price effect on the day of inclusion and in the period that followed. These results are consistent with previous studies that found a price drop on the changing day and in the following period. A further test of the relationship between the abnormal returns on the announcement day and the changing day revealed that the price increase was concentrated on the announcement day. A possible explanation is that index funds trading on the Swedish exchange have recognized that they can trade closer to the announcement day without incurring losses, and have acted accordingly. For the exclusions, neither the changing day nor the period following it was statistically significant. This could be due to a delayed reaction to the changing day, given that this group also reacted slowly to the announcement day: both the announcement day and the day after it were statistically significant at the 1% level. Other possible causes of the deviating results are errors in execution or in the data.

Conclusions. The result of this study is consistent with other studies that find a temporary price reaction to index reconstruction.
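The market-model abnormal-return computation at the core of this methodology can be sketched as follows. This is a minimal illustration with synthetic return series, not the study's actual code; the window boundaries and data are assumptions:

```python
import numpy as np

def market_model_ar(stock_ret, market_ret, est_win, event_win):
    # Fit the market model R_stock = alpha + beta * R_market by OLS
    # over the estimation window.
    beta, alpha = np.polyfit(market_ret[est_win], stock_ret[est_win], 1)
    # Abnormal return: actual return minus the model-expected return.
    ar = stock_ret[event_win] - (alpha + beta * market_ret[event_win])
    return ar, np.cumsum(ar)  # per-day AR and cumulative AR (CAR)

# Synthetic returns in which the stock follows the market model exactly,
# so the abnormal returns around the "event" are zero by construction.
market = np.array([0.010, -0.020, 0.015, 0.005, -0.010, 0.020, 0.030])
stock = 0.001 + 0.5 * market
ar, car = market_model_ar(stock, market, slice(0, 5), slice(5, 7))
```

In an actual event study, `ar` would be averaged across the 195 stocks and tested for significance on the announcement and changing days.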
Seminar Hochleistungsrechnen und Benchmarking: x264-Encoder als Benchmark

Naumann, Stefan January 2014
Modern video encoding requires a great many computations. Among other things, the image is divided into macroblocks, motion vectors are computed, and motion predictions are made in order to save storage space in the compressed file. The x264 encoder tries to realize this in several different ways, which makes the actual encoding process slow; on older or slower PCs it takes considerably longer than other methods. In addition, the x264 encoder uses standards such as SSE, AVX, and OpenCL to save time by computing several data items simultaneously. x264 is therefore also well suited to evaluating such standards and to investigating the speed gain obtained from vector operations or GPU acceleration.
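The speed gain from vector operations that x264 exploits can be illustrated independently of x264 itself with the sum-of-absolute-differences (SAD) kernel used in motion estimation. A sketch comparing a scalar loop with NumPy's vectorized version; the 16x16 block size matches a macroblock, and the random data is an assumption:

```python
import numpy as np

def sad_scalar(a, b):
    # Reference SAD: one element at a time, as a plain loop.
    total = 0
    for x, y in zip(a, b):
        total += abs(int(x) - int(y))
    return total

def sad_vector(a, b):
    # The same SAD using NumPy's vectorized kernels (SIMD under the hood).
    return int(np.abs(a.astype(np.int32) - b.astype(np.int32)).sum())

rng = np.random.default_rng(0)
blk_a = rng.integers(0, 256, 16 * 16, dtype=np.uint8)  # one 16x16 macroblock
blk_b = rng.integers(0, 256, 16 * 16, dtype=np.uint8)
same = sad_scalar(blk_a, blk_b) == sad_vector(blk_a, blk_b)
```

Timing the two functions on large blocks shows the order-of-magnitude gap that motivates hand-written SIMD kernels in encoders.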
Benchmarking och Utvärdering av Protokollet QUIC : En jämförelse av QUIC gentemot TCP

Ekberg, Adam, Tedengren, Ivan January 2017
Since 2012, Google has been developing a new transport protocol called QUIC (Quick UDP Internet Connections). The purpose of the QUIC protocol is to speed up the web, first of all by producing lower response times for websites. This is interesting from several perspectives. It benefits the common user who browses the web, but it also matters economically: studies show that quicker website response times attract customers both short term and long term, which is important in areas such as e-commerce. On top of this, the Internet alone (home computers, data centers, etc.) accounts for about 10% of the world's electricity consumption, and a quicker, more efficient transport protocol could help lower this number, since a great deal of data is transferred over the Internet each day. QUIC is already in use by many of Google's servers and is used when browsing the web with a Chrome or Opera browser, which means that many people have already encountered QUIC unknowingly. This degree project focuses on the main problems that make the QUIC protocol needed and compares QUIC to TCP, which has been the dominant transport protocol for reliable data transmission for decades. In this project, a test environment was implemented that makes it possible to compare website response times. Two tests were made in which common Internet conditions were simulated to see how these conditions affect the response time of each protocol. The tests showed that QUIC and TCP are roughly equal in response time when the delay is 100 ms or less and there is no packet loss. When the delay exceeds 100 ms, our tests showed that QUIC delivers quicker response times. The tests also showed that QUIC is superior to TCP when data is transferred over a connection with packet loss. It can be questioned, however, whether our TCP server could have been better optimized to compete with QUIC.
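Much of the delay sensitivity the tests measure follows from handshake round trips before the first byte arrives. A back-of-the-envelope model; the RTT counts are simplifying assumptions (roughly 3 RTTs for TCP plus TLS 1.2, 1 RTT for QUIC's combined handshake, ignoring 0-RTT resumption and packet loss):

```python
def first_byte_time(rtt_ms, handshake_rtts):
    # Time to the first response byte: connection setup plus one
    # request/response round trip.
    return (handshake_rtts + 1) * rtt_ms

for rtt in (20, 100, 200):
    tcp_tls = first_byte_time(rtt, 3)  # TCP handshake + TLS 1.2 handshake
    quic = first_byte_time(rtt, 1)     # combined transport/crypto handshake
    print(f"RTT {rtt:3d} ms: TCP+TLS {tcp_tls} ms, QUIC {quic} ms")
```

The model makes plain why the gap between the protocols grows with the delay, matching the pattern seen above 100 ms.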
Java jämfört med C#, vilken sorterar snabbast på Raspberry Pi? / Java compared to C#, which sorts fastest on Raspberry Pi?

Olofsson, Christoffer January 2015
In this study, Java and C# are set against each other, running on a Raspberry Pi, to see which of them sorts integer vectors fastest. HotSpot is used as the Java engine and Mono for C#; they sort vectors using sorting algorithms from each language's standard library and one implemented algorithm based on selection sort. This work is for those who want to work with an object-oriented language on the Raspberry Pi but have not yet decided which one to use. The results show that Java performs better than C# in most cases, with some exceptions where C# performs better.
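The implemented algorithm based on selection sort can be sketched as follows (shown here in Python rather than Java or C#; the test data is arbitrary):

```python
import random

def selection_sort(v):
    # In-place selection sort: repeatedly swap the smallest remaining
    # element to the front of the unsorted region. O(n^2) comparisons.
    for i in range(len(v) - 1):
        m = i
        for j in range(i + 1, len(v)):
            if v[j] < v[m]:
                m = j
        v[i], v[m] = v[m], v[i]
    return v

data = random.Random(42).sample(range(10_000), 1_000)
selection_sort(data)
```

In a benchmark of this kind, the sort would be timed repeatedly over vectors of increasing size and compared against the standard library's sort.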
Bayesian and Frequentist Approaches for the Analysis of Multiple Endpoints Data Resulting from Exposure to Multiple Health Stressors.

Nyirabahizi, Epiphanie 08 March 2010
In risk analysis, benchmark dose (BMD) methodology is used to quantify the risk associated with exposure to stressors such as environmental chemicals. It consists of fitting a mathematical model to the exposure data; the BMD is the dose expected to result in a pre-specified response, or benchmark response (BMR). Most available exposure data come from single-chemical exposures, but living organisms are exposed to multiple sources of hazards, and in some studies researchers may observe multiple endpoints on one subject. Statistical approaches to the multiple-endpoints problem can be partitioned into a dimension-reduction group and a dimension-preserving group. As a dimension-reduction method, composite scores built with a desirability function are used to evaluate the neurotoxic effects of a mixture of five organophosphate pesticides (OP) at a fixed mixing-ratio ray, with five endpoints observed. Then a Bayesian hierarchical model, as a single unifying dimension-preserving method, is introduced to evaluate the risk associated with exposure to chemical mixtures. At a pre-specified vector of BMRs of interest, the method estimates a tolerable region referred to as the benchmark dose tolerable area (BMDTA) in multidimensional Euclidean space. The endpoints defining the BMDTA are determined, and model uncertainty and model selection problems are addressed using the Bayesian model averaging (BMA) method.
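The basic BMD computation (fit a dose-response model, then invert it at the benchmark response) can be sketched for a simple quantal one-hit model. The model choice, parameter values, and BMR here are illustrative assumptions, not the dissertation's:

```python
import numpy as np
from scipy.optimize import curve_fit

def one_hit(d, g, b):
    # Quantal one-hit model: background response g plus dose-driven risk.
    return g + (1 - g) * (1 - np.exp(-b * d))

def bmd_extra_risk(b, bmr):
    # Extra risk (P(d) - P(0)) / (1 - P(0)) = 1 - exp(-b*d) = BMR
    # =>  BMD = -ln(1 - BMR) / b
    return -np.log(1 - bmr) / b

doses = np.array([0.0, 1.0, 2.0, 4.0, 8.0])
observed = one_hit(doses, 0.05, 0.3)  # synthetic response fractions
(g_hat, b_hat), _ = curve_fit(one_hit, doses, observed, p0=[0.1, 0.1])
bmd10 = bmd_extra_risk(b_hat, 0.10)   # BMD at a 10% benchmark response
```

The Bayesian hierarchical approach described above generalizes this single-endpoint calculation to a vector of BMRs over multiple endpoints.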
Transport av styckegods på järnväg: en utredande studie / Transportation of break-bulk cargo on railway: an investigative study

Häggblom, Linnea, Norman, Mikael January 2017
In today's society, large quantities of goods are shipped both domestically within Sweden and across borders. The increasing flow of goods places higher demands on the capacity of the infrastructure, while at the same time requiring thought and action for a sustainable environment. The overall aim of the study is to highlight development areas for cargo freight by railway. The more specific goals are to identify the enablers needed to establish an intermodal terminal that handles break-bulk cargo directly from railway, and the barriers that might exist. To achieve this aim, a qualitative study was conducted with interviews as primary data, and the results were produced by studying the market prerequisites and conducting a competitive intelligence analysis and a benchmark. In the results, the interviews and the examined documents have been compiled; these data formed the basis for the analysis and conclusions. Large parts of the collected data indicated that break-bulk cargo handling in intermodal terminals is currently not offered and is not considered economically viable. However, the study also revealed that break-bulk cargo handling is a service desired by the business sector, as rail is regarded as an environmentally friendly mode of transportation, something the business sector appreciates. To cope with this kind of cargo handling and transportation, more research, better cooperation between the private and public sectors, and infrastructure changes are needed.
Uma formulação por média-variância multi-período para o erro de rastreamento em carteiras de investimento. / A multi-period mean-variance formulation of tracking error for portfolio selection.

Zabala, Yeison Andres 24 February 2016
In this work, an optimal policy for portfolio selection based on mean-variance analysis of the multi-period tracking error (ERM) is derived, where the ERM is understood as the difference between the capital accumulated by the selected portfolio and that accumulated by a benchmark portfolio. The methodology discussed by Li and Ng in [24] was applied to obtain an analytical solution, generalizing the single-period case introduced by Roll in [38]. A portfolio was then selected from the Brazilian stock market based on the correlation factor, with the São Paulo stock exchange index IBOVESPA adopted as the benchmark and the basic interest rate SELIC as the fixed-income asset. Two cases were considered: a portfolio composed of risky assets only (case I), and a portfolio with a risk-free asset indexed to SELIC together with the assets of case I (case II).
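The multi-period tracking error as defined above (wealth of the chosen portfolio minus wealth of the benchmark, period by period) can be sketched numerically. The return series here are arbitrary illustrations:

```python
import numpy as np

def tracking_error_path(port_ret, bench_ret, w0=1.0):
    # Compound each return series into a wealth path, then take the
    # difference e_t = W_portfolio(t) - W_benchmark(t) in each period.
    wp = w0 * np.cumprod(1.0 + np.asarray(port_ret))
    wb = w0 * np.cumprod(1.0 + np.asarray(bench_ret))
    return wp - wb

err = tracking_error_path([0.02, -0.01, 0.03], [0.015, -0.005, 0.025])
```

The mean-variance formulation then trades off the expectation of this difference against its variance over the whole horizon, rather than one period at a time.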
Os programas de melhoria realmente importam?: uma avaliação em uma empresa de manufatura

Souza, Iberê Guarani de 31 January 2014
Every day, companies seek to improve their productive efficiency and thus increase their profitability and competitiveness. There are several ways to discover the critical competitiveness factors that may be present in the various manufacturing sectors, and the use of robust techniques to assess and measure these factors is essential to support decision making. This study analyzes the influence of continuous improvement and learning processes on efficiency and production volume in a manufacturing company. To achieve this objective, the research conducts a case study using Data Envelopment Analysis (DEA) combined with a linear regression test and an ANOVA test. A conceptual model with four main hypotheses and eight secondary ones is formulated. To evaluate DEA efficiency, the model uses variable returns to scale (VRS) with input orientation, considering the main raw materials used by the company based on total variable cost. The linear regression test evaluates the impact of the improvement and learning processes on DEA efficiency, while the ANOVA test evaluates the average efficiency of each production line for each year analyzed. The study is longitudinal, reviewing six years of manufacturing. The results show that only one of the production lines increased its efficiency over time, and that two production lines saw their production volume affected by the improvement actions. The variables related to Kaizen programs, hours of training, and employee experience significantly influenced the model. The projects focused on continuous improvement and learning were not sufficient to increase efficiency in two major production lines, and the study shows that production volume negatively impacts the efficiency of one of the production lines. With this analysis, it is possible to identify which factors are representative in increasing productive efficiency; the technology upgrade stands out as an important factor to be pursued by the company studied.
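An input-oriented VRS (BCC) DEA efficiency score of the kind used in the study can be written as a small linear program. A minimal sketch with one input and one output per decision-making unit (DMU); the data and solver choice are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import linprog

def dea_vrs_input(X, Y, o):
    """Input-oriented VRS (BCC) efficiency of DMU o.
    X: (m inputs x n DMUs), Y: (s outputs x n DMUs).
    Decision variables: theta, lambda_1..lambda_n."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.zeros(n + 1)
    c[0] = 1.0                                # minimize theta
    A_ub = np.zeros((m + s, n + 1))
    b_ub = np.zeros(m + s)
    A_ub[:m, 0] = -X[:, o]                    # sum(lam*x) <= theta * x_o
    A_ub[:m, 1:] = X
    A_ub[m:, 1:] = -Y                         # sum(lam*y) >= y_o
    b_ub[m:] = -Y[:, o]
    A_eq = np.zeros((1, n + 1))
    A_eq[0, 1:] = 1.0                         # sum(lam) = 1 (the VRS condition)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.x[0]

X = np.array([[2.0, 4.0, 3.0]])  # one input, three DMUs
Y = np.array([[1.0, 2.0, 1.0]])  # one output
effs = [dea_vrs_input(X, Y, o) for o in range(3)]
```

Here the first two DMUs lie on the frontier (score 1), while the third uses 3 units of input for an output the frontier achieves with 2, giving a score of 2/3.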
Estudo, avaliação e comparação de técnicas de detecção não supervisionada de outliers / Study, evaluation and comparison of unsupervised outlier detection techniques

Campos, Guilherme Oliveira 05 March 2015
The area of outlier detection (or anomaly detection) plays a fundamental role in discovering patterns in data that can be considered exceptional from some perspective. Detecting such patterns is relevant because, in many data mining applications, they represent extraordinary behaviors that deserve special attention. An important distinction exists between supervised and unsupervised detection techniques; this project focuses on the unsupervised ones. There are dozens of algorithms in this category in the literature, and new algorithms are proposed from time to time, but each uses its own notion of what should or should not be considered an outlier, which is a subjective concept in the unsupervised context. This considerably complicates the choice of a particular algorithm for a given practical application. While it is common knowledge that no machine learning algorithm can be superior to all others in all application scenarios, it is a relevant question whether the performance of certain algorithms tends in general to dominate that of certain others, at least for particular classes of problems. This project contributes to the study, selection, and pre-processing of databases suitable for joining a benchmark collection for evaluating unsupervised outlier detection algorithms, and comparatively evaluates the performance of outlier detection methods. During part of my master's work, I had the intellectual collaboration of Erich Schubert, Ira Assent, Barbora Micenková, Michael Houle and, especially, Joerg Sander and Arthur Zimek. Their contribution was essential for the analysis of the results and for the compact way of presenting them.
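The kind of evaluation such a benchmark supports (score points with an unsupervised detector, then measure ranking quality against ground-truth labels with ROC AUC) can be sketched with a k-nearest-neighbour distance score. The detector, data, and choice of k are illustrative assumptions:

```python
import numpy as np

def knn_outlier_scores(X, k=3):
    # Score each point by its distance to its k-th nearest neighbour,
    # a common unsupervised outlier score (higher = more outlying).
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    d_sorted = np.sort(d, axis=1)
    return d_sorted[:, k]  # column 0 is each point's distance to itself

def roc_auc(labels, scores):
    # ROC AUC via the rank (Mann-Whitney U) formulation; assumes no ties.
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

rng = np.random.default_rng(1)
inliers = rng.normal(0, 1, size=(50, 2))
outliers = rng.uniform(6, 12, size=(5, 2))  # planted, well-separated outliers
X = np.vstack([inliers, outliers])
y = np.array([0] * 50 + [1] * 5)
auc = roc_auc(y, knn_outlier_scores(X))
```

A benchmark collection repeats exactly this measurement across many datasets and detectors so that their rankings can be compared on equal terms.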
Um benchmark para avaliação de técnicas de busca no contexto de análise de mutantes SQL / A benchmark for the evaluation of search techniques in the context of SQL mutation analysis

Queiroz, Leonardo Teixeira 02 August 2013
One of the concerns in testing database applications is keeping operational and computational costs low. One way to do so is to ensure that the test databases are small but effective in revealing defects in SQL statements. Such databases can be constructed directly or obtained by reducing large production databases. Reduction involves combinatorial aspects that require a specific search technique. In this context, in response to a gap identified in the literature, this work builds and provides a benchmark that enables the performance evaluation, using SQL mutation analysis, of any search technique intended to reduce databases. To exercise the search techniques, the benchmark was built with two scenarios, each composed of a production database and a set of SQL statements. In addition, as a reference for the search techniques, it also contains the performance of randomly reduced databases. As a secondary objective, the results of the experiments conducted while building the benchmark were analyzed to answer important questions about which factors are involved in the complexity of SQL statements in the context of mutation testing. A key finding in this regard was that the restrictiveness of SQL commands is the factor that most influences statement complexity.
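The core of SQL mutation analysis (run the original statement and a mutant against a test database and check whether their results differ) can be sketched with SQLite. The table, statement, and mutation operator are illustrative, not taken from the benchmark:

```python
import sqlite3

# Original statement and a mutant (relational operator replaced: > -> >=).
ORIGINAL = "SELECT id FROM emp WHERE salary > 3000"
MUTANT = "SELECT id FROM emp WHERE salary >= 3000"

def killed(db_rows):
    # A test database "kills" the mutant if the two queries disagree on it.
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE emp (id INTEGER, salary INTEGER)")
    con.executemany("INSERT INTO emp VALUES (?, ?)", db_rows)
    orig = con.execute(ORIGINAL).fetchall()
    mut = con.execute(MUTANT).fetchall()
    con.close()
    return orig != mut

# A reduced database keeping the boundary row (salary == 3000) kills the
# mutant; one without it cannot distinguish the two statements.
full_db = [(1, 2000), (2, 3000), (3, 5000)]
reduced_db = [(1, 2000), (3, 5000)]
```

A search technique evaluated against the benchmark tries to find a small subset of the production rows that still kills as many mutants as the full database does.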
