About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
71

Performance evaluation of shared cache memory for multi-core architectures

Alves, Marco Antonio Zanata January 2009
In the current context of multi-core innovation, where new integration technologies provide an increasing number of transistors per chip, techniques for increasing data throughput are of great importance for current and future multi-core and many-core processors. With the continuous demand for performance, cache memories have been widely adopted across computer architecture designs. Processors currently on the market point toward shared L2 cache memories, yet the gains and costs inherent to these sharing models are not fully clear, so studies addressing the different aspects of cache sharing in multi-core processors are important. This dissertation evaluates different cache-sharing organizations by modeling and applying workloads to them, in order to obtain significant results on the performance and influence of cache sharing in multi-core processors. Several sharing configurations were evaluated using traditional performance techniques, such as increased associativity, larger line size, larger cache size, and additional cache levels, investigating the correlation between these cache architectures and the different workload applications. The results show the importance of integrating cache architecture design with the physical memory design in order to obtain the best trade-off between cache access time and cache miss reduction. Within the evaluated design space, due to physical and performance constraints, the 1Core/L2 and 2Cores/L2 organizations with 32 MB of total cache (shared 2 MB banks) and a 128-byte line size represent a good choice for physical implementation in general-purpose systems, delivering good performance across all evaluated applications without major overheads in area occupation and power consumption. Furthermore, this dissertation concludes that, for current and future integration technologies, traditional performance techniques applied to cache memories, such as larger sizes, higher associativity, and longer lines, should not yield real performance gains unless the additional latency they introduce is reduced, so as to balance the reduction in cache miss rate against the data access time.
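The dissertation's closing point — cache enlargements only pay off if the extra access latency is balanced against the miss-rate reduction — is the classic average-memory-access-time (AMAT) trade-off. A minimal sketch, with purely illustrative latency and miss-rate numbers (not measurements from the thesis):

```python
# AMAT model: a minimal sketch of the access-time vs. miss-rate
# trade-off. All cycle counts and miss rates below are illustrative
# assumptions, not data from the dissertation.

def amat(hit_time_cycles, miss_rate, miss_penalty_cycles):
    """Average memory access time in cycles."""
    return hit_time_cycles + miss_rate * miss_penalty_cycles

# A larger, more associative L2 lowers the miss rate but raises the hit time.
small_l2 = amat(hit_time_cycles=10, miss_rate=0.05, miss_penalty_cycles=200)
large_l2 = amat(hit_time_cycles=18, miss_rate=0.02, miss_penalty_cycles=200)

# Despite fewer misses, the larger cache loses (22.0 vs. 20.0 cycles)
# because its added hit latency is not compensated -- the balance the
# dissertation highlights.
print(small_l2, large_l2)
```

Under these assumed numbers the "improved" cache is slower on average, which mirrors the dissertation's conclusion that capacity and associativity gains must be weighed against latency growth.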
72

Hardware and software co-design of a process scheduler for heterogeneous multicore architectures based on reconfigurable computing

Maikon Adiles Fernandez Bueno 05 November 2013
Heterogeneous multiprocessor architectures aim to extract higher performance from processes by assigning each to a core suited to its demands. Extracting this performance, however, depends on an efficient scheduling mechanism, able to identify process demands at run time and to designate the most appropriate processor according to its resources. This work proposes and implements a hardware/software scheduler model for heterogeneous multiprocessor architectures, applied to the Linux operating system and the SPARC Leon3 processor as a proof of concept. Performance monitors were implemented inside the processors to identify process demands at run time. For each process, its demand is projected onto the other processors in the architecture, and a balancing step then distributes processes among processors to maximize total system performance and reduce the overall processing time. The Hungarian maximization algorithm used in the scheduler's balancing step was implemented in hardware, providing parallelism and higher performance in its execution. The scheduler was validated through the parallel execution of several benchmarks, resulting in reduced execution times compared to a scheduler without heterogeneity support.
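The balancing step described above is an assignment problem: map each process to a core so that total projected performance is maximized. The thesis implements the Hungarian algorithm in hardware; this stdlib-only sketch brute-forces the same objective over permutations purely to illustrate what that step computes (the demand matrix is a made-up example, not thesis data):

```python
# Assignment-problem sketch: maximize total projected performance.
# The Hungarian algorithm solves this in O(n^3); brute force over
# permutations is used here only for clarity on tiny inputs.
from itertools import permutations

def best_assignment(perf):
    """perf[i][j]: projected performance of process i on core j.
    Returns (assignment tuple, total performance)."""
    n = len(perf)
    best = max(permutations(range(n)),
               key=lambda p: sum(perf[i][p[i]] for i in range(n)))
    return best, sum(perf[i][best[i]] for i in range(n))

# 3 processes x 3 heterogeneous cores (illustrative numbers)
perf = [[9, 5, 1],   # process 0: compute-bound, wants the fast core
        [4, 8, 2],   # process 1: best matched to the mid core
        [3, 2, 6]]   # process 2: fine on the small core
assignment, total = best_assignment(perf)
print(assignment, total)  # (0, 1, 2) 23
```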
73

Modelica PARallel benchmark suite (MPAR) - a test suite for evaluating the performance of parallel simulations of Modelica models

Hemmati Moghadam, Afshin January 2011
Using the object-oriented, equation-based modeling language Modelica, it is possible to model and simulate computationally intensive models. To reduce simulation time, a desirable approach is to run the simulations on parallel multi-core platforms. Several works have pursued this goal; the most recent adds explicit parallel programming constructs to the algorithmic parts of the Modelica language. This extension automatically generates parallel simulation code for execution on OpenCL-enabled platforms and has been implemented in the open-source OpenModelica environment. However, to ensure that this extension, as well as future work on parallel simulation of Modelica models, is feasible, systematic benchmarking against a set of appropriate Modelica models is essential, and that is the main focus of this thesis. This thesis presents a benchmark test suite of computationally intensive Modelica models relevant to parallel simulation. The suite is used as a means for evaluating the feasibility and performance of the OpenCL code generated when using the new Modelica language extension. In addition, several considerations and suggestions are given on how a modeler can efficiently parallelize sequential models to achieve better performance on OpenCL-enabled GPUs and multi-core CPUs. Measurements were made for both sequential and parallel implementations of the benchmark suite, using code generated by the OpenModelica compiler on different hardware configurations, including single- and multi-core CPUs as well as GPUs. The results show that simulating Modelica models using OpenCL as a target language is very feasible. For models with large data sizes and a high degree of parallelism, considerable speedup can be achieved on GPUs compared to single- and multi-core CPUs.
74

Real-Time Challenges of Vehicular Embedded Systems on Multi-Core: A Mapping Study

Iyer, Shankar Vanchesan January 2017
The increasing complexity of vehicular embedded systems has encouraged researchers and practitioners to adopt model-driven engineering in the development of these systems. In particular, several modelling languages have been introduced for representing the vehicular software architecture and its quality attributes. The current trend in the automotive domain is to shift from single-core architectures to multi-core ones in an attempt to provide the computational power required by the next generation of vehicles, particularly autonomous ones. On the one hand, multi-core architectures introduce new real-time challenges in the development of these systems, such as core interdependency. On the other hand, it is pivotal that modelling languages remain effective in representing the new vehicular architectures together with their novel concerns, so that model-based methodologies continue to pay off. In this thesis, we present a systematic mapping study focusing i) on the real-time challenges introduced by the adoption of multi-core architectures and ii) on the extent of the modelling support for the resolution of these challenges, in the automotive domain.
75

On the Design of Real-Time Systems on Multi-Core Platforms under Uncertainty

Wang, Tianyi 26 June 2015
Real-time systems are computing systems that demand assurance of not only the logical correctness of computational results but also their timing. To ensure timing constraints, traditional real-time system designs usually adopt a worst-case-based deterministic approach. However, such an approach is falling out of sync with the continuous evolution of IC technology and the increased complexity of real-time applications. As IC technology evolves into the deep sub-micron domain, process variation causes processor performance to vary from die to die, chip to chip, and even core to core. The extensive resource sharing on multi-core platforms also significantly increases the uncertainty in executing real-time tasks. The traditional approach can only lead to extremely pessimistic, and thus impractical, designs. Our research addresses this uncertainty when designing real-time systems on multi-core platforms. We first attacked the uncertainty caused by process variation, proposing a virtualization framework and developing techniques to optimize system performance under process variation. We further studied peak-temperature minimization for real-time applications on multi-core platforms, developing three heuristics to reduce peak temperature. Next, we addressed the uncertainty in task execution times by developing statistical real-time scheduling techniques. We studied fixed-priority scheduling of implicit-deadline periodic tasks with probabilistic execution times on multi-core platforms, then extended this work to tasks with explicit deadlines, introducing the concept of harmonic task sets for this more general case and developing new task partitioning techniques. Throughout our research, we conducted extensive simulations to study the effectiveness and efficiency of the developed techniques. Increasing process variation and the ever-growing scale and complexity of real-time systems both demand a paradigm shift in the design of real-time applications; effectively dealing with uncertainty is a challenging but critical problem. Our research is one effort in this endeavor, and we conclude the dissertation with discussions of potential future work.
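The harmonic task sets mentioned in entry 75 have a well-known payoff in fixed-priority scheduling: when every period divides every longer period, rate-monotonic scheduling is feasible up to 100% utilization, rather than the general Liu-Layland bound n(2^(1/n) - 1). A minimal sketch under that textbook result (the task set is illustrative, not from the dissertation):

```python
# Harmonic-period check and the Liu-Layland rate-monotonic bound.
# Task sets below are made-up examples, not data from the thesis.

def is_harmonic(periods):
    """True if each period divides every longer one."""
    ps = sorted(periods)
    return all(ps[i + 1] % ps[i] == 0 for i in range(len(ps) - 1))

def rm_utilization_bound(n):
    """Liu-Layland sufficient bound for n tasks under RM scheduling."""
    return n * (2 ** (1 / n) - 1)

tasks = [(1, 4), (2, 8), (4, 16)]          # (WCET, period) pairs, harmonic
u = sum(c / t for c, t in tasks)           # total utilization = 0.75
print(is_harmonic([t for _, t in tasks]))  # True
print(u <= 1.0)                            # harmonic sets: schedulable up to 1.0
print(u <= rm_utilization_bound(3))        # also under the general bound (~0.780)
```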
76

Design and implementation of massively parallel fine-grained processor arrays

Walsh, Declan January 2015
This thesis investigates the use of massively parallel fine-grained processor arrays to increase computational performance. As processors move towards multi-core processing, more energy-efficient designs can be obtained by increasing the number of cores on a chip rather than the clock frequency of a single processor. Following this philosophy, a processor core can be reduced in complexity, area, and speed to form a very small processor that can still perform basic arithmetic operations. Thanks to its small area, it can be replicated and scaled into a large parallel processor array offering significant performance. Two fine-grained parallel processor arrays are designed along these lines, each aiming for a small per-processor area so that a larger array fits in a given silicon budget. To demonstrate scalability and performance, a SIMD parallel processor array is designed for FPGA implementation, where each processor occupies only four 'slices' of a Xilinx FPGA. With such small area utilization, a large fine-grained array can be implemented on these FPGAs: a 32 × 32 processor array is built, and fast processing is demonstrated on image processing tasks. An event-driven MIMD parallel processor array is also designed, occupying a small area and scalable to much larger arrays. The event-driven approach lets a processor enter an idle mode when no events occur locally, reducing power consumption, and switch back to operational mode when events are detected. Each core has a multi-bit datapath and ALU and contains its own instruction memory, making the array a multi-core processor array. With area occupation the primary concern, the processor is kept relatively simple and connects to its four nearest neighbours. A small 8 × 8 prototype chip, implemented in a 65 nm CMOS technology process, operates at a clock frequency of 80 MHz and offers a peak performance of 5.12 GOPS, which can be scaled up in larger arrays. An application of the event-driven processor array is demonstrated using a simulation model of the processor: an event-driven algorithm performs distributed control of a simulated distributed manipulator, separating objects based on their physical properties.
77

The Development of Hardware Multi-core Test-bed on Field Programmable Gate Array

Shivashanker, Mohan 24 March 2011
The goal of this project is to develop a flexible multi-core hardware test-bed on a field-programmable gate array (FPGA) that can be used to effectively validate theoretical research on multi-core computing, especially power- and thermal-aware computing. Based on a commercial FPGA platform, the Xilinx Virtex-5 XUPV5-LX110T, we develop a homogeneous multi-core test-bed with four soft cores, each of which can dynamically adjust its performance under software control. We also enhance the operating system support for this platform with hardware and software primitives for inter-process communication, synchronization, and scheduling of processes across multiple cores. An application based on matrix addition and multiplication on multiple cores is implemented to validate the applicability of the test-bed.
78

Minimizing the unpredictability that real-time tasks suffer due to inter-core cache interference.

Satka, Zenepe, Hodžić, Hena January 2020
As companies introduce new capabilities and features in their products, the demand for computing power and performance in real-time systems increases. To achieve higher performance, processor manufacturers have introduced multi-core platforms, which provide the possibility of executing different tasks in parallel on multiple cores. Because tasks share the same cache level, they suffer interference that affects their timing predictability. This thesis is divided into two parts. The first part presents a survey of existing solutions to the problem of cache interference that tasks face on multi-core platforms. The second part focuses on a hardware-based technique introduced by Intel Corporation to achieve timing predictability of real-time tasks: Cache Allocation Technology (CAT). Its main idea is to divide the last-level cache into partitions, called classes of service, that are reserved for specific cores. Since the tasks of a core can only access its assigned partition, cache interference is reduced and better real-time task performance is achieved. Finally, to evaluate the efficiency of CAT, an experiment is conducted with different test cases, and the results show a significant difference in real-time task performance when CAT is applied.
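CAT expresses each class of service as a contiguous bitmask of cache ways in the last-level cache. The sketch below derives such disjoint way masks for an assumed 20-way LLC; the way counts are an invented example, and real systems program these masks through MSRs or the Linux resctrl interface rather than in application code:

```python
# Derive disjoint, contiguous capacity bitmasks (one per class of
# service) for a set-associative last-level cache. Way counts are an
# assumed example, not values from the thesis.

def way_masks(total_ways, ways_per_cos):
    """Return one contiguous way bitmask per class of service."""
    assert sum(ways_per_cos) <= total_ways
    masks, start = [], 0
    for n in ways_per_cos:
        masks.append(((1 << n) - 1) << start)  # n contiguous ways
        start += n
    return masks

# 20-way LLC split: 12 ways reserved for real-time cores, 8 for the rest.
rt_mask, other_mask = way_masks(20, [12, 8])
print(f"{rt_mask:#07x} {other_mask:#07x}")  # 0x00fff 0xff000
assert rt_mask & other_mask == 0  # disjoint partitions -> no inter-core eviction
```

Because the masks do not overlap, a task running on a "real-time" core can never evict lines belonging to the other partition, which is the mechanism by which CAT reduces the inter-core interference studied in the thesis.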
79

Multi-Core Fiber and Optical Supersymmetry: Theory and Applications

Macho Ortiz, Andrés 02 September 2019
To date, communication networks based on optical fibers are rapidly approaching their capacity limit as a direct consequence of the growth in data traffic demand over the last decade, driven by the ubiquity of smartphones, tablets, social networks, cloud computing applications, streaming services including video and gaming, and machine-to-machine communications. In this scenario, a new class of optical fiber, the multi-core fiber (MCF), which integrates the capacity of several classical optical fibers in approximately the same transverse section, has recently been proposed to overcome the capacity limits of current optical networks. However, exploiting the full potential of an MCF requires a deep understanding of the electromagnetic phenomena that arise when guiding light in this optical medium. In the first part of this thesis, we analyze these phenomena theoretically and then study the suitability of MCF technology in optical transport networks using radio-over-fiber transmissions. These findings could be of great utility for 5G and Beyond-5G cellular technology in the coming decades. In addition, the close connection between the mathematical frameworks of quantum mechanics and electromagnetism offers a great opportunity to explore ground-breaking design strategies for these new fibers that expand their basic functionalities. Revolving around this idea, in the second part of the thesis we propose to design a new class of MCFs using the mathematics of supersymmetry (SUSY), which emerged within string and quantum field theory as a means to unify the basic interactions of nature (the strong, electroweak, and gravitational interactions). Interestingly, a supersymmetric MCF allows us not only to propagate light but also to process the users' information during propagation, reducing the complexity of data processing at the receiver. Finally, we conclude the thesis by introducing a paradigm shift that allows the design of disruptive optical devices. We demonstrate that the basic ideas of SUSY in non-relativistic quantum mechanics, originally restricted to the space domain, can also be extended to the time domain, at least within the framework of photonics. Temporal supersymmetry may thus serve as a key tool for designing versatile and ultra-compact optical devices, enabling a promising new platform for integrated photonics. For the sake of completeness, we indicate how to extrapolate the main results of this thesis to other fields of physics, such as acoustics and quantum mechanics. / Macho Ortiz, A. (2019). Multi-Core Fiber and Optical Supersymmetry: Theory and Applications [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/124964
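The SUSY design strategy invoked here rests on the textbook factorization of supersymmetric quantum mechanics; the equations below are the standard formulation, not expressions taken from the thesis itself:

```latex
% Standard SUSY-QM factorization: partner Hamiltonians built from a
% superpotential W(x) share their spectrum except for the ground state.
H^{(1)} = A^{\dagger}A, \qquad
H^{(2)} = A A^{\dagger}, \qquad
A = \frac{d}{dx} + W(x),
\qquad
V^{(1,2)}(x) = W^{2}(x) \mp \frac{dW}{dx}.
```

In guided-wave optics the potential plays, roughly, the role of the refractive-index profile, so a SUSY partner of an MCF's index profile supports nearly the same propagation constants as the original fiber — the degeneracy that makes mode-selective processing during propagation conceivable.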
80

Reliability-Aware Thermal Management of Real-Time Multi-Core Systems

Xu, Shikang 18 March 2015
Continued scaling of CMOS technology has led to increasing operating temperatures in VLSI circuits. High temperature raises the probability of permanent errors (failures), a critical threat for real-time systems. As the multi-core architecture gains popularity, this research proposes an adaptive workload assignment approach for multi-core real-time systems that balances thermal stress among cores. Whereas previously developed scheduling algorithms use temperature as the criterion, the proposed algorithm uses the reliability of each core to dynamically assign tasks. Simulation results show that the proposed algorithm improves system reliability by as much as 10% over a commonly used static assignment, while temperature-based algorithms gain 4%. The reliability difference between cores, which indicates the imbalance of thermal stress, is up to 25 times smaller when the proposed algorithm is applied.
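The core idea of entry 80 — steer work to the core whose accumulated reliability is highest, rather than to the coolest core — can be sketched in a few lines. The exponential wear model and all numbers below are assumptions chosen for illustration, not the dissertation's actual reliability model:

```python
# Reliability-aware assignment sketch: place each unit of work on the
# core with the highest remaining reliability, balancing thermal wear.
# The wear model and temperatures are illustrative assumptions.
import math

def assign(reliabilities):
    """Index of the core with the highest remaining reliability."""
    return max(range(len(reliabilities)), key=lambda i: reliabilities[i])

def age(reliability, temp_c, seconds, k=1e-6):
    """Degrade reliability faster at higher temperature (assumed model)."""
    return reliability * math.exp(-k * temp_c * seconds)

cores = [1.0, 1.0, 1.0, 1.0]       # per-core reliability, initially pristine
temps = [70.0, 55.0, 60.0, 65.0]   # per-core steady temperatures (assumed)
for _ in range(100):               # repeatedly place one unit of work
    i = assign(cores)
    cores[i] = age(cores[i], temps[i], seconds=1.0)

# The healthiest core always receives the next task:
print(assign([0.97, 0.99, 0.98]))  # 1
```

Because the policy always loads the healthiest core, the per-core reliabilities stay tightly clustered over time, which is the balanced-wear effect the abstract reports (up to 25x smaller inter-core reliability difference).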
