1 |
Inverse Discrete Cosine Transform by Bit Parallel Implementation and Power Comparison
Bhardwaj, Divya Anshu, January 2003
The goal of this project was to implement and compare the Inverse Discrete Cosine Transform using three methods: bit parallel, digit serial and bit serial. This report describes a one-dimensional Discrete Cosine Transform implemented with the bit-parallel method in a 0.35 µm technology. When implementing the design, several considerations, such as the word length, were taken into account. The code was written in VHDL and some of the calculations were done in MATLAB. The VHDL code was then synthesized using Design Analyzer from Synopsys; power was calculated and the results were compared.
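As a rough illustration of the kind of reference calculation mentioned above, the sketch below computes an 8-point 1-D IDCT in floating point. It is only an assumed verification model, analogous to the MATLAB calculations, and not the thesis's fixed-point VHDL design.

```java
// Minimal reference model of an 8-point 1-D IDCT (DCT-III); illustrative
// floating-point sketch, not the fixed-point bit-parallel hardware design.
public class Idct1D {
    static double[] idct(double[] X) {
        int N = X.length;
        double[] x = new double[N];
        for (int n = 0; n < N; n++) {
            double sum = 0.0;
            for (int k = 0; k < N; k++) {
                double ck = (k == 0) ? Math.sqrt(1.0 / N) : Math.sqrt(2.0 / N);
                sum += ck * X[k] * Math.cos(Math.PI * (2 * n + 1) * k / (2.0 * N));
            }
            x[n] = sum;
        }
        return x;
    }

    public static void main(String[] args) {
        // Transform coefficients of a constant signal: only the DC term is non-zero.
        double[] X = {8 * Math.sqrt(1.0 / 8), 0, 0, 0, 0, 0, 0, 0};
        for (double v : idct(X)) System.out.printf("%.4f ", v); // prints 1.0000 eight times
    }
}
```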
|
2 |
Development of an incentive and scheduling mechanism for a Peer-to-Peer computing system
Rius Torrentó, Josep Maria, 25 January 2012
Peer-to-Peer (P2P) computing offers new research challenges in the field of distributed computing. This paradigm can take advantage of a huge number of idle CPU cycles over the Internet in order to solve very complex computational problems. All these resources are provided voluntarily by millions of users spread around the world, which means the cost of allocating and maintaining the resources is split among, and borne by, the individual owners/peers. For this reason, P2P computing can be seen as a low-cost alternative to expensive supercomputers.
Obviously, not every kind of parallel application is suitable for a P2P computing environment. Applications with high communication requirements between tasks or with high QoS needs should still be run in a Local Area Network (LAN) environment. In contrast, problems with huge computational requirements that can easily be split into millions of independent tasks are well suited to P2P computing, especially since solving them with a supercomputer would be extremely expensive.
One of the most critical aspects in the design of P2P systems is the development of incentive techniques to enforce cooperation and resource sharing among participants. Incentive policies for P2P distributed computing systems are a new research field that requires specific mechanisms to fight malicious and selfish behavior by peers. Encouraging peers to collaborate in file-sharing has been widely investigated, but in the P2P computing field this issue is still at a very early stage of research. Furthermore, the dynamics of peer participation are an inherent property of P2P systems and critical for their design and evaluation, which further increases the difficulty of P2P computing.
Another critical aspect of P2P computing systems is the development of scheduling techniques to achieve efficient and scalable management of the computational resources. Unlike file-sharing, which is based on immutable resources such as files, P2P computing mainly involves mutable resources such as CPU and memory. Within the scheduling field, P2P computing can be seen as a particular variant of Grid computing. As with the incentive policies, an extensive list of publications can be found that study scheduling problems for distributed computing platforms such as Clusters or Grids, but few of them focus on P2P computing. For this reason, the scheduling problem in this kind of network is a field that still requires in-depth research.
In this thesis we propose a Distributed Incentive and Scheduling Integrated Mechanism (DISIM) with a two-level topology, designed to work on large-scale distributed P2P computing systems. The low level is formed by associations of peers controlled by super-peers, which carry the main responsibility for managing these groups and gathering information about their state. Scalability limitations at the first level are avoided by providing the mechanism with an upper level made up of super-peers interconnected through a logical overlay.
Regarding incentives, we propose a mechanism based on credits with a two-level topology designed to operate on different platforms of shared computing networks. One of the main contributions is a new policy for managing the credits, called Weighted, that increases peer participation significantly. This mechanism reflects P2P user dynamics, penalizes free-riders efficiently and encourages peer participation. Moreover, the use of a popular pricing strategy, called reverse Vickrey Auction, protects the system against malicious peer behavior. Simulation results show that our policy outperforms alternative approaches, maximizing system throughput and limiting free-riding behavior by peers.
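To make the pricing idea concrete, the following is a minimal sketch of a reverse Vickrey (second-price) auction, in which the cheapest bidder wins the task but is paid the second-lowest ask. It is an assumed illustration of the general strategy, not code from DISIM; the Bid type, field names and credit values are hypothetical.

```java
import java.util.*;

// Reverse Vickrey auction sketch: the lowest bidder wins but is paid the
// second-lowest bid, which makes truthful bidding the dominant strategy.
public class ReverseVickrey {
    record Bid(String peerId, double askedCredits) {}

    static Optional<Map.Entry<String, Double>> award(List<Bid> bids) {
        if (bids.size() < 2) return Optional.empty(); // need at least two bids for a second price
        List<Bid> sorted = new ArrayList<>(bids);
        sorted.sort(Comparator.comparingDouble(Bid::askedCredits));
        Bid winner = sorted.get(0);
        double payment = sorted.get(1).askedCredits(); // winner receives the second-lowest ask
        return Optional.of(Map.entry(winner.peerId(), payment));
    }

    public static void main(String[] args) {
        List<Bid> bids = List.of(new Bid("peerA", 12.0), new Bid("peerB", 9.5), new Bid("peerC", 15.0));
        award(bids).ifPresent(a ->
            System.out.println(a.getKey() + " executes the task for " + a.getValue() + " credits"));
        // peerB wins and is paid 12.0 credits.
    }
}
```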
From the scheduling point of view, the low-level scheduler takes user dynamism into account and is almost optimal, since it holds all the status information about the workload and computational power of its constituent peers. Our main contribution at the upper level is to propose three criteria that use only local information for scheduling tasks, providing the overall system with scalability. By setting these criteria, the system can easily, dynamically and rapidly adapt its behavior to very different kinds of parallel jobs in order to achieve efficient performance. The results obtained proved the efficiency of the overall model and its convergence to the best assignment, achieved by an ideal centralized policy with global information.
|
3 |
Técnicas e arquitetura para captura de traços e execução especulativa / Techniques and architecture for trace detection and speculative execution
Porto, João Paulo, 17 August 2018
Advisor: Guido Costa Souza de Araújo / Doctoral thesis - Universidade Estadual de Campinas, Instituto de Computação
Previous issue date: 2011 / Resumo: It is well known that the microprocessor development model based on extracting Instruction-Level Parallelism (ILP) from sequential code has reached its limit. Finding scalable and efficient solutions that keep numerous instructions in flight simultaneously has proven to be a greater challenge than imagined. Computer architects and micro-architects have therefore been seeking alternative solutions for the development of new architectures. Among the existing solutions, those based on extracting Thread-Level Parallelism (TLP) have been gaining strength. In short, TLP is a kind of parallelism that tries to break a sequential program into relatively independent tasks in order to execute them in parallel. TLP can be extracted by hardware or by software. Ideally, a hybrid solution should be used, with the software identifying the TLP extraction opportunities and the hardware providing support for executing the generated code. With such a compromise, the hardware is freed from the need to speculate and the software can work with stronger guarantees. In this Thesis, automatic forms of parallelization and TLP extraction were studied. Initially, the focus was on dynamic execution traces of sequential programs. Existing techniques (such as MRET and Trace Trees) proved unsuitable, so a new technique called Compact Trace Tree (CTT) was developed, which proved faster than Trace Trees. Trace Trees (TT) also exhibit a high level of code specialization (tail duplication), a feature absent from MRET. Besides CTT, this Thesis presents Trace Execution Automata (TEA), an automaton that represents execution traces. In our experiments, this representation yielded almost 80% space savings compared with the usual representation. The focus of the Thesis then shifted to execution loops and to static parallelization of sequential code through Decoupled Software Pipelining (DSWP). Our first result in this direction, using Java, clearly showed that, without any hardware support, static parallelization could reach an average performance gain of 48% in the parallelized applications. Finally, the Thesis proposes a DSWP-based parallel execution model that provides data consistency among the threads of parallelized programs. Although this architecture was not fully evaluated, the initial results are promising. Moreover, the required hardware support is simple and builds on the existing cache coherence protocol, without significant changes to the processor / Abstract: The usual, Instruction-Level Parallelism (ILP)-oriented microprocessor development model is known to have reached a hard-to-break limit. Finding scalable and efficient solutions that keep several instructions on-the-fly simultaneously has proven to be more difficult than imagined. In this sense, computer architects and micro-architects have been seeking alternatives to develop new architectures. Among all, the TLP-based solutions are gaining strength. In short, TLP strives to break a sequential program into quasi-independent tasks in order to execute them in parallel. TLP can be extracted either by hardware or software. Ideally, a hybrid solution would be employed, with the software being responsible for identifying TLP opportunities, and the hardware offering support for the parallel code execution. With such a solution, the hardware is free from the heavy speculation burden, whilst the software can be parallelized with stronger guarantees. In this Thesis, automatic parallelization and TLP strategies were studied. The research first focused on dynamic execution traces. Existing techniques, such as MRET and Trace Trees, proved unsuitable for our goals, which led us to develop a new trace identification technique called Compact Trace Trees, which proved to be faster than Trace Trees. Compact Trace Trees also present trace specialization, which MRET lacks. Besides Compact Trace Trees, this Thesis presents a new trace representation called Trace Execution Automata, an automaton representing the execution traces. This technique revealed nearly 80% memory size savings when compared to the usual, code-duplication representation. Next, the Thesis' focus shifted to parallelizing loops statically. Our initial result in this direction, using Java and without any hardware support, clearly revealed that static parallelization of sequential programs could reach a 48% average speedup when compared to their sequential execution. Finally, a new, Decoupled Software Pipelining-based execution model with automatic data coherence amongst parallelized programs' threads is proposed by the Thesis. Despite the lack of a full model evaluation, the initial results are promising. Differently from other proposals, the hardware support necessary for this architecture is simple and builds upon the existing cache coherence protocol, without any modifications to this sensitive system component / Doctorate / Doctor in Computer Science
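As a rough sketch of the decoupled software pipelining idea discussed above, the example below splits a loop into two stages that run in separate Java threads and communicate through a bounded queue. The stage bodies, queue size and poison value are assumptions made for illustration, not code from the thesis.

```java
import java.util.concurrent.*;

// DSWP-style sketch: a loop is split into a traversal stage and a compute stage
// that run in parallel and communicate through a blocking queue.
public class DswpSketch {
    public static void main(String[] args) throws Exception {
        BlockingQueue<Integer> channel = new ArrayBlockingQueue<>(1024);
        final int POISON = -1; // sentinel marking the end of the iteration stream

        // Stage 1: walks the "loop iterations" and forwards work items.
        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 10_000; i++) channel.put(i);
                channel.put(POISON);
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        // Stage 2: performs the expensive part of the original loop body.
        Thread consumer = new Thread(() -> {
            long acc = 0;
            try {
                for (int v; (v = channel.take()) != POISON; ) acc += (long) v * v;
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            System.out.println("sum of squares = " + acc);
        });

        producer.start(); consumer.start();
        producer.join(); consumer.join();
    }
}
```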
|
4 |
Zvyšování úspěšnosti webových stránek A/B a multivariantním testováním / Using A/B testing and multivariate testing to increase web profitability
Pajskr, Josef, January 2008
This text concerns methods of parallel quantitative testing of web pages in order to increase their profitability. These methods bring new possibilities and overcome the drawbacks of their predecessors. The work deals in detail with the preparation, execution and result evaluation of quantitative tests. In the chapters on test preparation, the text describes the proper technique for formulating hypotheses and selecting a method for their verification. The text continues with a description of the tools available for conducting quantitative testing. The following chapters contain a set of best practices and suggestions for successful testing. In its practical part, the text demonstrates the benefits of these methods by interpreting the results of two successful tests conducted on a real e-commerce project and a corporate website. The practical part also describes the technical aspects of the conducted tests in detail.
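One common way to verify an A/B-test hypothesis is a two-proportion z-test on the conversion rates of the two variants. The sketch below illustrates this generic check with made-up visitor and conversion numbers; it is not a tool or result from the thesis.

```java
// Two-proportion z-test on conversion rates for an A/B test; the data are invented.
public class AbTestZTest {
    public static void main(String[] args) {
        long visitorsA = 10_000, conversionsA = 520;  // control variant
        long visitorsB = 10_000, conversionsB = 590;  // tested variant

        double pA = (double) conversionsA / visitorsA;
        double pB = (double) conversionsB / visitorsB;
        double pPooled = (double) (conversionsA + conversionsB) / (visitorsA + visitorsB);
        double se = Math.sqrt(pPooled * (1 - pPooled) * (1.0 / visitorsA + 1.0 / visitorsB));
        double z = (pB - pA) / se;

        System.out.printf("pA=%.4f pB=%.4f z=%.2f%n", pA, pB, z);
        // |z| > 1.96 corresponds to a two-sided significance level of roughly 5%.
        System.out.println(Math.abs(z) > 1.96 ? "difference is significant" : "not significant");
    }
}
```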
|
5 |
Vývoj vláknových aplikací v jazyce Java / Development of threaded applications in Java
ATTL, Karel, January 2008
This diploma thesis is aimed at programming multithreaded applications in Java. Java 5 introduced the package java.util.concurrent, which makes developing parallel applications significantly easier and more effective. The work is conceived as an introduction to programming multithreaded applications in Java and can also be used as educational material. A theoretical introduction to processes and the technological background of multitasking draws the analogy to threads, while also touching on the Java technology and on how Java works with memory. The rest of the thesis concerns practical work with threads. The topic is covered from the very beginning, i.e. creating Thread objects, up to advanced topics such as working with the package java.util.concurrent, and it also discusses some problems that can appear when writing multithreaded applications.
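As a small example of the java.util.concurrent style referred to above, the sketch below submits independent tasks to an ExecutorService thread pool instead of managing Thread objects by hand; the task itself (squaring numbers) is just a placeholder.

```java
import java.util.*;
import java.util.concurrent.*;

// ExecutorService example: independent Callable tasks run on a fixed thread pool
// and their results are collected through Futures.
public class ConcurrentDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<Future<Long>> results = new ArrayList<>();

        for (int i = 1; i <= 8; i++) {
            final long n = i;
            results.add(pool.submit(() -> n * n)); // Callable<Long> submitted to the pool
        }

        long sum = 0;
        for (Future<Long> f : results) sum += f.get(); // blocks until each task completes
        System.out.println("sum of squares 1..8 = " + sum); // 204

        pool.shutdown();
    }
}
```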
|
6 |
A Haptic Device Interface for Medical Simulations using OpenCL / Ett haptiskt gränssnitt för medicinska simuleringar med OpenCL
Machwirth, Mattias, January 2013
The project evaluates how well a haptic device can be used to interact with a visualization of volumetric data. Since the interface to the haptic device requires explicit surface descriptions, triangles had to be constructed from the volumetric data. The algorithm used to extract these triangles is marching cubes. The triangles produced by marching cubes are then transmitted to the haptic device to enable the force feedback. Marching cubes is well suited to parallelization and was executed using OpenCL. Graphs in the report show that this parallelization ran almost 70 times faster than the sequential CPU counterpart of the same algorithm. Further development of the project would give medical students the opportunity to practice difficult procedures in a realistic and accurate simulation instead of on a real patient. / The project evaluates how well haptic equipment can be used to interact with a visualization of volumetric data. Since the haptic equipment required explicitly described surfaces, triangles first had to be generated from the volumetric data. The algorithm used for this is marching cubes. The triangles produced by marching cubes are then passed on to the haptic device in order to obtain force feedback, so that touch can be used and not only sight. Since marching cubes lends itself to parallelization, OpenCL was used to speed up the algorithm. Graphs in the project show that the algorithm runs up to 70 times faster when executed as an OpenCL kernel instead of sequentially on the CPU. The idea is that, once the project has been developed further, it can be used by medical students to practice difficult incisions in a realistic simulation before the same procedure is performed on a real person.
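The data-parallel structure that makes marching cubes a good fit for an OpenCL kernel can be sketched on the CPU as well: every cell of the volume is classified independently. The illustration below uses Java parallel streams, a made-up scalar field, and an assumed grid size and iso-value; it omits the edge/triangle lookup tables of the full algorithm and only counts the cells the iso-surface passes through, so it is not the project's OpenCL code.

```java
import java.util.stream.IntStream;

// Per-cell classification step of marching cubes, run in parallel over a flat cell index.
public class MarchingCubesSketch {
    static final int NX = 64, NY = 64, NZ = 64; // assumed grid size
    static final float ISO = 0.5f;              // assumed iso-value

    // Placeholder scalar field standing in for the volumetric data set.
    static float field(int x, int y, int z) {
        float dx = x - NX / 2f, dy = y - NY / 2f, dz = z - NZ / 2f;
        return 1.0f / (1.0f + (dx * dx + dy * dy + dz * dz) / 200f);
    }

    // Classify one cell: set a bit for every corner that lies above the iso-value.
    static int cubeIndex(int x, int y, int z) {
        int idx = 0, bit = 0;
        for (int dz = 0; dz <= 1; dz++)
            for (int dy = 0; dy <= 1; dy++)
                for (int dx = 0; dx <= 1; dx++, bit++)
                    if (field(x + dx, y + dy, z + dz) > ISO) idx |= 1 << bit;
        return idx;
    }

    public static void main(String[] args) {
        int cells = (NX - 1) * (NY - 1) * (NZ - 1);
        // Every cell is independent, so the classification step parallelizes trivially.
        long surfaceCells = IntStream.range(0, cells).parallel()
            .map(i -> cubeIndex(i % (NX - 1), (i / (NX - 1)) % (NY - 1), i / ((NX - 1) * (NY - 1))))
            .filter(idx -> idx != 0 && idx != 0xFF) // cells the iso-surface passes through
            .count();
        System.out.println("Cells intersected by the iso-surface: " + surfaceCells);
    }
}
```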
|
7 |
Analýza dat ze sekvenování příští generace ke studiu aktivity transposonů v nádorových buňkách / Analysis of NGS data for study of transposon activity in cancer cells
Hrazdilová, Ivana, January 2013
The theoretical part of this diploma thesis gives a brief characterization of human mobile elements (transposons), which represent nearly 50% of the human genome. It provides a basic transposon classification and describes the types of transposons present in the human genome, as well as their mobilization, activation and regulation mechanisms. The work also deals with the domestication of transposons, describes the ways in which transposable elements contribute to DNA damage, and summarizes the diseases caused by the mutagenic activity of transposons in the human genome. The theoretical part concludes with a description of next-generation sequencing (NGS) technologies. In the practical part, data from an RNA-seq experiment were analyzed in order to compare the differential transposon activity in normal and cancer cells from prostate and colorectal tissues. In addition to publicly available sophisticated tools (TopHat), new scripts were created to analyze these data. The results show that cancer cells exhibit overexpression of transposons, which corresponds with the published results and suggests a connection between transposon activation and cancer development.
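As a toy illustration of the kind of comparison behind differential transposon activity, the sketch below computes a log2 fold-change of read counts between cancer and normal samples. The element names and counts are invented and the counts are assumed to be already normalized; this is not part of the thesis's analysis scripts.

```java
import java.util.Map;

// Log2 fold-change of (assumed normalized) read counts, cancer vs. normal, per element.
public class FoldChange {
    public static void main(String[] args) {
        // {normal reads, cancer reads}; all values are made up.
        Map<String, long[]> counts = Map.of(
            "LINE-1", new long[]{1200, 4100},
            "Alu",    new long[]{5300, 6000},
            "SVA",    new long[]{300,  950});
        double pseudo = 1.0; // pseudocount avoids log2(0)

        counts.forEach((element, c) -> {
            double fc = Math.log((c[1] + pseudo) / (c[0] + pseudo)) / Math.log(2);
            System.out.printf("%-6s log2 fold-change = %+.2f%n", element, fc);
        });
    }
}
```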
|