  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

G2P-DBSCAN: Estratégia de Particionamento de Dados e de Processamento Distribuído do DBSCAN com MapReduce. / G2P-DBSCAN: Data Partitioning Strategy and Distributed Processing of DBSCAN with MapReduce.

Araújo Neto, Antônio Cavalcante January 2016 (has links)
ARAÚJO NETO, Antônio Cavalcante. G2P-DBSCAN: Estratégia de Particionamento de Dados e de Processamento Distribuído do DBSCAN com MapReduce. 2016. 63 f. Dissertação (mestrado em ciência da computação) - Universidade Federal do Ceará, Fortaleza-CE, 2016. / Clustering is a data mining technique that groups the elements of a data set so that elements in the same group are more similar to each other than to elements of other groups. This thesis studies the problem of running the density-based clustering algorithm DBSCAN in a distributed fashion through the MapReduce paradigm. In distributed processing it is important that the partitions to be processed have approximately the same size, since the total processing time is bounded by the time the node with the largest amount of data takes to finish computing the data assigned to it. For this reason we also propose a data partitioning strategy, called G2P, which seeks to distribute the data set across partitions in a balanced way and takes the characteristics of the DBSCAN algorithm into account. More specifically, the G2P strategy uses grid and graph structures to help divide the space along low-density regions.
The distributed processing of the DBSCAN algorithm itself is carried out in two MapReduce phases, plus an intermediate phase that identifies clusters that may have been split across more than one partition, called merge candidates. The first MapReduce phase applies DBSCAN to each partition individually; the second verifies and, where necessary, corrects the merge-candidate clusters. Experiments with real data sets show that the G2P-DBSCAN strategy outperforms the adopted baseline in all considered scenarios, in both running time and the quality of the partitions obtained. / Clusterização é uma técnica de mineração de dados que agrupa elementos de um conjunto de dados de forma que os elementos que pertencem ao mesmo grupo são mais semelhantes entre si que entre elementos de outros grupos. Nesta dissertação nós estudamos o problema de processar o algoritmo de clusterização baseado em densidade DBSCAN de maneira distribuída através do paradigma MapReduce. Em processamentos distribuídos é importante que as partições de dados a serem processadas tenham tamanhos aproximadamente iguais, uma vez que o tempo total de processamento é delimitado pelo tempo que o nó com uma maior quantidade de dados leva para finalizar a computação dos dados a ele atribuídos. Por essa razão nós também propomos uma estratégia de particionamento de dados, chamada G2P, que busca distribuir o conjunto de dados de forma balanceada entre as partições e que leva em consideração as características do algoritmo DBSCAN. Mais especificamente, a estratégia G2P usa estruturas de grade e grafo para auxiliar na divisão do espaço em regiões de baixa densidade. Já o processamento distribuído do algoritmo DBSCAN se dá por meio de duas fases de processamento MapReduce e uma fase intermediária que identifica clusters que podem ter sido divididos em mais de uma partição, chamados de candidatos à junção.
A primeira fase de MapReduce aplica o algoritmo DBSCAN nas partições de dados individualmente, e a segunda verifica e corrige, caso necessário, os clusters candidatos à junção. Experimentos utilizando dados reais mostram que a estratégia G2P-DBSCAN se comporta melhor que a solução utilizada para comparação em todos os cenários considerados, tanto em tempo de execução quanto em qualidade das partições obtidas.
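The intermediate merge-candidate phase this abstract describes — local clusters that share points in partition overlap regions are tied together into global clusters — can be sketched with a union-find structure. This is an illustrative sketch, not the thesis's implementation; in particular, the input layout (a map from point id to the list of (partition, local-cluster-id) pairs that labelled it) is an assumption made here for demonstration:

```python
class UnionFind:
    """Minimal union-find over hashable items (here: (partition, cluster) pairs)."""
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def merge_local_clusters(local_labels):
    """local_labels maps a point id to the list of (partition, local_cluster_id)
    pairs that labelled it. A point labelled by two partitions lies in an
    overlap region, so its clusters are merge candidates and get tied together."""
    uf = UnionFind()
    for assignments in local_labels.values():
        for a, b in zip(assignments, assignments[1:]):
            uf.union(a, b)
    # Assign a stable global id to each union-find root, in first-seen order.
    roots, global_ids = {}, {}
    for assignments in local_labels.values():
        for c in assignments:
            global_ids[c] = roots.setdefault(uf.find(c), len(roots))
    return global_ids
```

A point such as `"p2"` below, claimed by clusters in two partitions, makes those two local clusters collapse into one global cluster, while untouched local clusters keep their own ids.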
2

G2P-DBSCAN: Data Partitioning Strategy and Distributed Processing of DBSCAN with MapReduce. / G2P-DBSCAN: Estratégia de Particionamento de Dados e de Processamento Distribuído do DBSCAN com MapReduce.

Antônio Cavalcante Araújo Neto 17 August 2015 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
3

Identifiering av områden med förhöjd olycksrisk för cyklister baserad på cykelhjälmsdata / Identification of areas with elevated accident risk for cyclists based on bicycle helmet data

Roos, Johannes, Lindqvist, Sven January 2020 (has links)
Antalet cyklister i Sverige väntas öka under kommande år, men trots stora insatser för trafiksäkerheten minskar inte antalet allvarliga cykelolyckor i samma takt som bilolyckor. Denna studie har tittat på cykelhjälmstillverkaren Hövdings data som samlats in från deras kunder. Hjälmen fungerar som en krockkudde som löses ut vid en kraftig huvudrörelse som sker vid en olycka. Datan består av GPS-positioner tillsammans med ett värde från en Support Vector Machine (SVM) som indikerar hur nära en hjälm är att registrera en olycka och därmed lösas ut. Syftet med studien var att analysera denna data från cyklister i Malmö för att se om det går att identifiera platser som är överrepresenterade i antalet förhöjda SVM-nivåer, och om dessa platser speglar verkliga, potentiellt farliga trafiksituationer. Density-based spatial clustering of applications with noise (DBSCAN) användes för att identifiera kluster av förhöjda SVM-nivåer. DBSCAN är en oövervakad maskininlärningsalgoritm som ofta används för att klustra spatial data med brus i datamängden. Från dessa kluster räknades antalet unika cykelturer som genererat en förhöjd SVM-nivå i klustret, samt totala antalet cykelturer som passerat genom klustret. 405 kluster identifierades och sorterades på flest unika cykelturer som genererat en förhöjd SVM-nivå, varpå de 30 översta valdes ut för närmare analys. För att validera klustren mot registrerade cykelolyckor hämtades data från Swedish Traffic Accident Data Acquisition (STRADA), den nationella olycksdatabasen i Sverige. De trettio utvalda klustren hade 0,082 % cykelolyckor per unik cykeltur i klustren och för resterande 375 kluster var siffran 0,041 %. Antal olyckor per kluster i de utvalda trettio klustren var 0,46 och siffran för övriga kluster var 0,064. De topp trettio klustren kategoriserades sedan i tre kategorier. De kluster som hade en eventuell förklaring till förhöjda SVM-nivåer, som farthinder och kullersten, gavs kategori 1.
Hövding har kommunicerat att sådana inslag i underlaget kan generera en lägre grad av förhöjd SVM-nivå. Kategori 2 var de kluster som hade haft en byggarbetsplats inom klustret. Kategori 3 var de kluster som inte kunde förklaras med någon av de andra två kategorierna. Andel olyckor per unik cykeltur i kluster som tillhörde kategori 1 var 0,068 %, för kategori 2 0,071 % och för kategori 3 0,106 %. Resultaten indikerar att denna data är användbar för att identifiera platser med förhöjd olycksrisk för cyklister. Datan som behandlats i denna studie har en rad svagheter, varpå resultaten bör tolkas med försiktighet. Exempelvis är datamängden från en kort tidsperiod, ca 6 månader, varpå säsongsbetingat cykelbeteende inte är representerat i dataunderlaget. Det antas även förekomma en del brusdata, vilket eventuellt har påverkat resultaten. Men det finns potential i denna typ av data att i framtiden, när mer data samlats in, med större träffsäkerhet kunna identifiera olycksdrabbade platser för cyklister. / The number of cyclists in Sweden is expected to increase in the coming years, but despite major efforts in road safety, the number of serious bicycle accidents does not decrease at the same rate as car accidents. This study has looked at the data collected from the bicycle helmet manufacturer Hövding's customers. The helmet acts as an airbag that is triggered when a strong head movement occurs in the event of an accident. The data consists of GPS positions along with a Support Vector Machine (SVM)-generated value which indicates how close the helmet is to registering an accident, and thus being triggered. The purpose of the study was to analyze this data from cyclists in Malmö to see if it is possible to identify places that are over-represented in the number of elevated SVM levels, and whether these sites reflect real, potentially dangerous traffic situations.
Density-based spatial clustering of applications with noise (DBSCAN) was used to identify clusters of elevated SVM levels. DBSCAN is an unsupervised clustering algorithm widely used for clustering spatial data. From these clusters, the number of unique cycle trips that generated an elevated SVM level in the cluster was calculated, as well as the total number of cycle trips that passed through each cluster. 405 clusters were identified and sorted by the highest number of unique bike rides that generated an elevated SVM level, whereupon the top 30 were selected for further analysis. In order to validate the clusters against registered bicycle accidents, data were obtained from the Swedish Traffic Accident Data Acquisition (STRADA), the national accident database in Sweden. The thirty selected clusters had 0.082 % cycling accidents per unique cycle trip in the clusters, and for the remaining 375 clusters the figure was 0.041 %. The number of accidents per cluster in the selected thirty clusters was 0.46, and the number for the other clusters was 0.064. The top thirty clusters were then categorized into three categories. The clusters that had a possible explanation for elevated SVM levels, such as speed bumps and cobblestones, were given category 1. Hövding has communicated that such elements in the road surface can generate elevated SVM levels. Category 2 was the clusters that had had a construction site within the cluster. Category 3 was the clusters that could not be explained by either of the other two categories. The proportion of accidents per unique cycle trip in clusters belonging to category 1 was 0.068 %, for category 2 0.071 %, and for category 3 0.106 %. The results indicate that this data is useful for identifying places with increased risk of accidents for cyclists. The data processed in this study has a number of weaknesses, so the results should be interpreted with caution.
For example, the data covers a short period of time, about 6 months, so seasonal cycling behavior is not represented in the data set. The data set is also assumed to contain some noisy data, which may have affected the results. But there is potential in this type of data: in the future, when more data has been collected, it could be used to identify accident-prone places for cyclists with greater accuracy.
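The core step of this study — DBSCAN over positions with elevated SVM levels, followed by a count of unique trips per cluster — can be sketched from scratch. The `eps`/`min_pts` values and the trip-counting helper below are illustrative assumptions, not the study's actual parameters:

```python
from collections import deque

def dbscan(points, eps, min_pts):
    """Label 2-D points: cluster ids 0, 1, ... and -1 for noise."""
    n = len(points)
    labels = [None] * n

    def neighbors(i):
        xi, yi = points[i]
        return [j for j in range(n)
                if (points[j][0] - xi) ** 2 + (points[j][1] - yi) ** 2 <= eps ** 2]

    cluster = 0
    for i in range(n):
        if labels[i] is not None:
            continue
        seeds = neighbors(i)
        if len(seeds) < min_pts:
            labels[i] = -1           # provisional noise; may become a border point
            continue
        labels[i] = cluster
        queue = deque(seeds)
        while queue:
            j = queue.popleft()
            if labels[j] == -1:
                labels[j] = cluster  # reached noise point becomes a border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nb = neighbors(j)
            if len(nb) >= min_pts:   # j is a core point: keep expanding
                queue.extend(nb)
        cluster += 1
    return labels

def trips_per_cluster(labels, trip_ids):
    """Count unique trips that contributed a point to each cluster (noise excluded)."""
    uniques = {}
    for lab, trip in zip(labels, trip_ids):
        if lab >= 0:
            uniques.setdefault(lab, set()).add(trip)
    return {c: len(s) for c, s in uniques.items()}
```

On two dense point groups with one isolated point, this labels the groups 0 and 1, marks the isolated point as noise, and then counts how many distinct trips touched each cluster — the study's ranking criterion.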
4

Article identification for inventory list in a warehouse environment

Gao, Yang January 2014 (has links)
In this paper, an object recognition system has been developed that uses local image features. The system can recognize multiple classes of objects in an image. It is divided into two parts: object detection and object identification. Object detection is based on SIFT features, which are invariant to image illumination, scaling and rotation. SIFT features extracted from a test image are matched reliably against a database of SIFT features from known object images. The DBSCAN clustering method is used for multiple object detection, and the RANSAC method is used to reduce the number of false detections. Object identification is based on the 'Bag-of-Words' model, a method based on vector quantization of SIFT descriptors of image patches. In this model, K-means clustering and the Support Vector Machine (SVM) classification method are applied.
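The 'Bag-of-Words' vector quantization step described here can be sketched with a toy k-means over descriptor vectors. Real SIFT descriptors are 128-dimensional and come from an image library, so the low-dimensional vectors used below are stand-ins; only the quantize-then-count mechanism is the point:

```python
import random

def kmeans(vectors, k, iters=20, seed=0):
    """Plain Lloyd's k-means over equal-length vectors; returns the k centers."""
    rng = random.Random(seed)
    centers = rng.sample(vectors, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in vectors:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(v, centers[c])))
            groups[j].append(v)
        # Recompute each center as its group's mean; keep old center if group is empty.
        centers = [[sum(col) / len(g) for col in zip(*g)] if g else centers[j]
                   for j, g in enumerate(groups)]
    return centers

def bow_histogram(descriptors, centers):
    """Quantize each descriptor to its nearest visual word; return word counts."""
    hist = [0] * len(centers)
    for d in descriptors:
        j = min(range(len(centers)),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(d, centers[c])))
        hist[j] += 1
    return hist
```

The histogram produced per image is the fixed-length feature vector that an SVM classifier would then be trained on.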
5

An Evaluation of Clustering and Classification Algorithms in Life-Logging Devices

Amlinger, Anton January 2015 (has links)
Using life-logging devices and wearables is a growing trend in today’s society. These devices yield vast amounts of information, data that is not directly overseeable or graspable at a glance due to its size. Gathering a qualitative, comprehensible overview of this quantitative information is essential for life-logging services to serve their purpose. This thesis provides a comparison of CLARANS, DBSCAN and SLINK, representing different branches of clustering algorithm types, as tools for activity detection in geo-spatial data sets. These activities are then classified using a simple model whose parameters are learned via Bayesian inference, as a demonstration of a different branch of clustering. Results are provided using Silhouettes as the evaluation for geo-spatial clustering and a user study for the end classification. The results are promising as an outline for a framework of classification and activity detection, and shed light on various pitfalls that might be encountered during implementation of such a service.
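The Silhouette evaluation used above to compare the clusterings can be computed from scratch. This minimal sketch assumes small Euclidean data and leaves out the noise-label handling a full evaluation of DBSCAN output would need:

```python
def silhouette(points, labels):
    """Mean silhouette coefficient; singleton clusters score 0 by convention."""
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

    clusters = {}
    for p, l in zip(points, labels):
        clusters.setdefault(l, []).append(p)

    scores = []
    for p, l in zip(points, labels):
        own = clusters[l]
        if len(own) == 1:
            scores.append(0.0)
            continue
        # a: mean distance to the point's own cluster
        a = sum(dist(p, q) for q in own if q is not p) / (len(own) - 1)
        # b: mean distance to the nearest other cluster
        b = min(sum(dist(p, q) for q in other) / len(other)
                for k, other in clusters.items() if k != l)
        scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)
```

Well-separated clusters score close to 1, while a labeling that mixes the groups scores near or below 0, which is what makes the coefficient usable for ranking clusterings against each other.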
6

Density and partition based clustering on massive threshold bounded data sets

Kannamareddy, Aruna Sai January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / William H. Hsu / The project explores the possibility of increasing the efficiency of clusters formed from massive data sets using a threshold blocking algorithm. Clusters formed this way are denser and of higher quality. Clusters formed by individual clustering algorithms alone do not necessarily eliminate outliers, and the clusters generated can be complex or improperly distributed over the data set. The threshold blocking algorithm, from recent work by Michael Higgins of the Department of Statistics, performs better than existing algorithms at forming dense, distinctive units with a predefined threshold. Part of this project is developing a hybridized algorithm that applies existing clustering algorithms to re-cluster the units thus formed. Clustering on the seeds produced by the threshold blocking algorithm eases the task for the existing algorithm by eliminating the overhead of handling outliers, and the resulting clusters are more representative of the whole. Also, since the threshold blocking algorithm is proven to be fast and efficient, many more decisions can be predicted from large data sets in less time. The hybridized algorithm is evaluated by predicting similar songs from the Million Song Dataset.
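The hybrid idea — re-cluster the dense units produced by threshold blocking rather than the raw points — can be sketched by clustering block centroids. The single-linkage rule and the `link` threshold below are illustrative assumptions for demonstration, not the project's actual algorithms:

```python
def centroid(block):
    """Component-wise mean of a block of points."""
    return tuple(sum(d) / len(block) for d in zip(*block))

def recluster_blocks(blocks, link):
    """Single-linkage re-clustering of pre-formed blocks: blocks whose centroids
    lie within `link` of each other end up in the same final cluster.
    Returns one cluster label per block."""
    cents = [centroid(b) for b in blocks]
    parent = list(range(len(blocks)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(len(cents)):
        for j in range(i + 1, len(cents)):
            d = sum((a - b) ** 2 for a, b in zip(cents[i], cents[j])) ** 0.5
            if d <= link:
                parent[find(i)] = find(j)

    labels = {}
    return [labels.setdefault(find(i), len(labels)) for i in range(len(blocks))]
```

Because the second stage only sees one centroid per block, outliers already absorbed (or rejected) by the blocking step never reach it, which is the overhead saving the abstract describes.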
7

Diseño de procesos para la segmentación de clientes según su comportamiento de compra y hábito de consumo en una empresa de consumo masivo / Process design for customer segmentation by purchase behavior and consumption habits in a consumer packaged goods company

Rojas Araya, Javier Orlando January 2017 (has links)
Magíster en Ingeniería de Negocios con Tecnologías de Información / The packaged-food industry has evolved over time. The first sales channels for reaching end customers were neighborhood stores, which were strongly threatened by the proliferation of large supermarket chains. The arrival of the internet also created a new channel that lets end customers order products, pay for them through mobile applications, and receive them at home. Despite this evolution in channels, neighborhood stores refuse to disappear. Many customers still prefer their friendly, personalized service, together with a wide range of products and attractive prices. The company is no stranger to this reality and also sells its products to end customers through the supermarket and neighborhood-store channels. In the store channel it serves roughly 25,000 customers per month nationwide, with the highest concentration in the central zone of the country. Segmenting these customers to understand their purchase behavior and consumption habits has become the central axis of this channel's strategy; analyzing sales reports is no longer enough to raise the performance of the Commercial Area. This project aims to group the company's store-channel customers by purchase behavior and consumption habit and to characterize the resulting segments. To reach this goal, the Business Engineering methodology is applied, which runs from the definition of the strategic positioning, through the business model, the process architecture, the detailed process design, and the design of the technological support for those processes, to the construction and deployment of the solution. Algorithms suited to this kind of task, DBSCAN and K-Means, are also used.
The results segment customers into seven groups by purchase behavior and seven by consumption habit, answering the questions of when, how much, and what the channel's customers buy. The project's benefit translates into increased sales: actions that recover customers in the process of churning, and a higher average ticket for customers who buy frequently but with very low billing amounts. / 07/04/2022
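The "when, how much, and what" behavior axes can be derived from raw transactions before any clustering. This sketch assumes a simple (customer, date, amount) log; the thesis's actual feature set and data model are not specified here:

```python
from collections import defaultdict
from datetime import date

def purchase_features(transactions, today):
    """transactions: iterable of (customer_id, purchase_date, amount).
    Returns {customer: (recency_days, frequency, avg_ticket)} — rough
    'when / how often / how much' axes that DBSCAN or K-Means can then segment."""
    last = {}
    count = defaultdict(int)
    total = defaultdict(float)
    for cust, day, amount in transactions:
        last[cust] = max(last.get(cust, day), day)  # most recent purchase date
        count[cust] += 1                            # purchase frequency
        total[cust] += amount                       # cumulative spend
    return {c: ((today - last[c]).days, count[c], total[c] / count[c])
            for c in count}
```

A customer with low recency, high frequency, and a low average ticket would land in the "frequent but low-billing" segment the abstract targets for average-ticket growth.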
8

Product categorisation using machine learning / Produktkategorisering med hjälp av maskininlärning

Vasic, Stefan, Lindgren, Nicklas January 2017 (has links)
Machine learning is a method in data science for analysing large data sets and extracting hidden patterns and common characteristics in the data. Corporations often have access to databases containing great amounts of data that could contain valuable information. Navetti AB wants to investigate the possibility to automate their product categorisation by evaluating different types of machine learning algorithms. This could increase both time- and cost efficiency. This work resulted in three different prototypes, each using different machine learning algorithms with the ability to categorise products automatically. The prototypes were tested and evaluated based on their ability to categorise products and their performance in terms of speed. Different techniques used for preprocessing data are also evaluated and tested. An analysis of the tests shows that when providing a suitable algorithm with enough data it is possible to automate the manual categorisation. / Maskininlärning är en metod inom datavetenskap vars uppgift är att analysera stora mängder data och hitta dolda mönster och gemensamma karaktärsdrag. Företag har idag ofta tillgång till stora mängder data som i sin tur kan innehålla värdefull information. Navetti AB vill undersöka möjligheten att automatisera sin produktkategorisering genom att utvärdera olika typer av maskininlärningsalgoritmer. Detta skulle dramatiskt öka effektiviteten både tidsmässigt och ekonomiskt. Resultatet blev tre prototyper som implementerar tre olika maskininlärningsalgoritmer som automatiserat kategoriserar produkter. Prototyperna testades och utvärderades utifrån dess förmåga att kategorisera och dess prestanda i form av hastighet. Olika tekniker som används för att förbereda data analyseras och utvärderas. En analys av testerna visar att med tillräckligt mycket data och en passande algoritm så är det möjligt att automatisera den manuella kategoriseringen.
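A minimal sketch of automated product categorisation in the spirit of the prototypes above: a multinomial naive Bayes classifier over product-description tokens. The classifier choice and the toy training data are assumptions made here for illustration; the thesis evaluated its own set of algorithms:

```python
import math
from collections import defaultdict

class NaiveBayesCategoriser:
    """Multinomial naive Bayes over whitespace tokens with add-one smoothing."""

    def fit(self, texts, labels):
        self.word_counts = defaultdict(lambda: defaultdict(int))
        self.class_counts = defaultdict(int)
        self.vocab = set()
        for text, label in zip(texts, labels):
            self.class_counts[label] += 1
            for tok in text.lower().split():
                self.word_counts[label][tok] += 1
                self.vocab.add(tok)
        return self

    def predict(self, text):
        def log_score(label):
            counts = self.word_counts[label]
            total = sum(counts.values())
            # log prior + sum of smoothed log likelihoods per token
            s = math.log(self.class_counts[label] / sum(self.class_counts.values()))
            for tok in text.lower().split():
                s += math.log((counts[tok] + 1) / (total + len(self.vocab)))
            return s
        return max(self.class_counts, key=log_score)
```

Trained on a handful of labelled descriptions, it assigns a new description to the category whose token distribution fits best — the kind of mapping the prototypes automate at scale.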
9

Detecting Self-Correlation of Nonlinear, Lognormal, Time-Series Data via DBSCAN Clustering Method, Using Stock Price Data as Example

Huo, Shiyin 15 December 2011 (has links)
No description available.
10

Deinterleaving of radar pulses with batch processing to utilize parallelism / Gruppering av radar pulser med batch-bearbetning för att utnyttja parallelism

Lind, Emma, Stahre, Mattias January 2020 (has links)
The threat level (specifically in this thesis, for aircraft) in an environment can be determined by analyzing radar signals. This task is critical and has to be solved fast and with high accuracy. The received electromagnetic pulses have to be identified in order to classify a radar emitter. Usually, there are several emitters transmitting radar pulses at the same time in an environment. These pulses need to be sorted into groups, where each group contains pulses from the same emitter. This thesis aims to find a fast and accurate solution to sort the pulses in parallel. The selected approach analyzes batches of pulses in parallel to exploit the advantages of a multi-threaded Central Processing Unit (CPU) or a Graphics Processing Unit (GPU). Firstly, a suitable clustering algorithm had to be selected. Secondly, an optimal batch size had to be determined to achieve high clustering performance and to rapidly process the batches of pulses in parallel. A quantitative method based on experiments was used to measure clustering performance, execution time, system response, and parallelism as a function of batch sizes when using the selected clustering algorithm. The algorithm selected for clustering the data was Density-based Spatial Clustering of Applications with Noise (DBSCAN) because of its advantages, such as not having to specify the number of clusters in advance, its ability to find arbitrary shapes of a cluster in a data set, and its low time complexity. The evaluation showed that implementing parallel batch processing is possible while still achieving high clustering performance, compared to a sequential implementation that used the maximum likelihood method. An optimal batch size in terms of data points and cutoff time is hard to determine since the batch size is very dependent on the input data. Therefore, one batch size might not be optimal in terms of clustering performance and system response for all streams of data.
A solution could be to determine optimal batch sizes in advance for different streams of data, then adapt a batch size depending on the stream of data. However, with a high level of parallelism, an additional delay is introduced that depends on the difference between the time it takes to collect data points into a batch and the time it takes to process the batch, thus the system will be slower to output its result for a given batch compared to a sequential system. For a time-critical system, a high level of parallelism might be unsuitable since it leads to slower response times. / Genom analysering av radarsignaler i en miljö kan hotnivån bestämmas. Detta är en kritisk uppgift som måste lösas snabbt och med bra noggrannhet. För att kunna klassificera en specifik radar måste de elektromagnetiska pulserna identifieras. Vanligtvis sänder flera emittrar ut radarpulser samtidigt i en miljö. Dessa pulser måste sorteras i grupper, där varje grupp innehåller pulser från en och samma emitter. Målet med denna avhandling är att ta fram ett sätt att snabbt och korrekt sortera dessa pulser parallellt. Den valda metoden använder grupper av data som analyserades parallellt för att nyttja fördelar med en multitrådad Central Processing Unit (CPU) eller en Graphics Processing Unit (GPU). Först behövde en klustringsalgoritm väljas och därefter en optimal gruppstorlek för den valda algoritmen. Gruppstorleken baserades på att grupperna kunde behandlas parallellt och snabbt, samt uppnå tillförlitlig klustring. En kvantitativ metod användes som baserades på experiment genom att mäta klustringens tillförlitlighet, exekveringstid, systemets svarstid och parallellitet som en funktion av gruppstorlek med avseende på den valda klustringsalgoritmen.
Density-based Spatial Clustering of Applications with Noise (DBSCAN) valdes som algoritm på grund av dess förmåga att hitta kluster av olika former och storlekar utan att på förhand ange antalet kluster för en mängd datapunkter, samt dess låga tidskomplexitet. Resultaten från utvärderingen visade att det är möjligt att implementera ett system med grupper av pulser och uppnå bra och tillförlitlig klustring i jämförelse med en sekventiell implementation av maximum likelihood-metoden. En optimal gruppstorlek i antal datapunkter och cutoff tid är svårt att definiera då storleken är väldigt beroende på indata. Det vill säga, en gruppstorlek måste inte nödvändigtvis vara optimal för alla typer av indataströmmar i form av tillförlitlig klustring och svarstid för systemet. En lösning skulle vara att definiera optimala gruppstorlekar i förväg för olika indataströmmar, för att sedan kunna anpassa gruppstorleken efter indataströmmen. Det uppstår en fördröjning i systemet som är beroende av differensen mellan tiden det tar att skapa en grupp och exekveringstiden för att bearbeta en grupp. Denna fördröjning innebär att en parallell grupp-implementation aldrig kommer kunna vara lika snabb på att producera sin utdata som en sekventiell implementation. Detta betyder att det i ett tidskritiskt system förmodligen inte är optimalt att parallellisera mycket eftersom det leder till långsammare svarstid för systemet.
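The batch-processing scheme this thesis evaluates — chop the pulse stream into batches and cluster the batches in parallel — can be sketched with a thread pool. The 1-D gap-based grouping below is a toy stand-in for DBSCAN, and the batch size is the latency/throughput knob the thesis discusses; none of this is the thesis's actual implementation:

```python
from concurrent.futures import ThreadPoolExecutor

def batches(stream, size):
    """Chop a pulse stream into fixed-size batches (the last may be shorter)."""
    return [stream[i:i + size] for i in range(0, len(stream), size)]

def cluster_batch(batch, eps=1.0):
    """Toy 1-D stand-in for DBSCAN: sort values and split where the gap > eps."""
    groups, current = [], []
    for v in sorted(batch):
        if current and v - current[-1] > eps:
            groups.append(current)
            current = []
        current.append(v)
    if current:
        groups.append(current)
    return groups

def cluster_stream(stream, batch_size, workers=4):
    """Cluster batches in parallel; larger batches raise the delay per result."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # Executor.map preserves batch order even though batches run concurrently.
        return list(pool.map(cluster_batch, batches(stream, batch_size)))
```

Note that pulses from one emitter can land in different batches (10.0 versus 10.2 and 9.8 below), which is exactly why the thesis's design needs a cross-batch reconciliation step and why batch size affects clustering quality as well as response time.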
