21

Hypercube machine implementation of low-level vision algorithms

Lim, Choon Kee January 1988 (has links)
No description available.
22

Parallel thinning algorithms and their implementation on hypercube machine

Xu, Yi-Chang January 1991 (has links)
No description available.
23

The Folded Hypercube ATM Switches

Park, Jahng Sun 03 October 2001 (has links)
Over the past few years, many high-performance asynchronous transfer mode (ATM) switches have been proposed. The majority of these switches achieve high performance but also have high hardware complexity, so there is a need for switch designs with low complexity and high performance. This research proposes three new ATM switches based on the folded hypercube network (FHC). The performance of the three architectures is studied using a network model and simulation. The major performance parameters measured are the cell loss rate and the cell delay time through the switch under uniform, normal, and bursty traffic patterns. To guarantee faster switching of time-sensitive cells, the routing algorithm of the three switches uses a priority scheme that gives higher precedence to time-sensitive cells. An output buffer controller is also designed to manage the buffers fairly. The three proposed switch architectures have lower complexity while providing equivalent or better switching performance than other, more complex ATM switches described in the literature. This research shows a new approach to designing ATM switches, using the FHC as the switching fabric for the first time instead of crossbar, multi-path, or Banyan-based switching fabrics. / Ph. D.
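To make the topology concrete, below is a minimal sketch (not the dissertation's actual switch design) of greedy routing on a folded hypercube, an n-cube augmented with a link from each node to its bitwise complement, together with a toy priority queue that serves time-sensitive cells first. The function names and the cell format are illustrative assumptions.

```python
import heapq

def fhc_next_hop(current: int, dest: int, n: int) -> int:
    """One routing step in a folded hypercube FHC(n): an n-cube plus a
    complementary link from each node to its bitwise complement."""
    diff = current ^ dest
    dist = bin(diff).count("1")          # Hamming distance on the plain n-cube
    if dist > n // 2:                    # complement link shortens the path
        return current ^ ((1 << n) - 1)
    lowest = diff & -diff                # otherwise fix the lowest differing bit
    return current ^ lowest

def route(src: int, dest: int, n: int):
    """Full path from src to dest; at most ceil(n/2) hops, the FHC diameter."""
    path = [src]
    while path[-1] != dest:
        path.append(fhc_next_hop(path[-1], dest, n))
    return path

# Priority queueing at a switch port: lower priority value = more urgent.
port_queue = []
heapq.heappush(port_queue, (1, 0, "data cell"))
heapq.heappush(port_queue, (0, 1, "time-sensitive cell"))
print(heapq.heappop(port_queue)[2])  # -> "time-sensitive cell"
print(route(0b0000, 0b1111, 4))      # complement link used: [0, 15]
```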
24

Exchanged Crossed Cube: A Novel Interconnection Network for Parallel Computation

Li, K., Mu, Y., Li, K., Min, Geyong January 2013 (has links)
The topology of interconnection networks plays a key role in the performance of parallel computing systems. A new interconnection network called the exchanged crossed cube (ECQ) is proposed and analyzed in this paper. We prove that ECQ has better properties than other variations of the basic hypercube in terms of a smaller diameter, fewer links, and a lower cost factor, which indicates reduced communication overhead, lower hardware cost, and a more balanced trade-off between performance and cost. Furthermore, it maintains several attractive advantages, including a recursive structure, high partitionability, and strong connectivity. Finally, optimal routing and broadcasting algorithms are proposed for this new network topology.
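For reference, here is a minimal sketch of the baseline figures such comparisons are made against: the diameter, link count, and cost factor of the ordinary hypercube Q_n. Taking cost = degree × diameter is an assumption for illustration; the paper's exact cost-factor definition may differ.

```python
def hypercube_metrics(n: int) -> dict:
    """Baseline figures for the n-dimensional hypercube Q_n, against
    which variants such as ECQ are compared."""
    nodes = 2 ** n
    return {
        "nodes": nodes,
        "degree": n,              # one link per dimension at each node
        "diameter": n,            # worst case flips every bit
        "links": n * nodes // 2,  # n * 2^(n-1)
        "cost": n * n,            # degree * diameter (assumed definition)
    }

for n in (3, 4, 5):
    print(n, hypercube_metrics(n))
```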
25

Answers to EMS Queries About Dynamic Deployment: Fractile Performance, Cost, and Management

Aljalahema, Rashid Shaheen January 2015 (has links)
Dynamic deployment is an Emergency Medical Services (EMS) ambulance management strategy in which 911 call demand coverage is maximized continuously through time. Unlike static deployment, where dispatched ambulances leave a coverage gap until they return to their home base after service, dynamic deployment redeploys idle ambulances to different locations if that increases demand coverage. The purpose of this dissertation was to study dynamic deployment as a viable, beneficial, and cost-effective methodology for managing EMS ambulances and crews. The literature, while rich in studies on static deployment, was lacking when it came to ambulance management strategies like dynamic deployment. Through a discrete-event simulation model, hypothetical EMS systems were simulated under dynamic and static deployment with different demand patterns, demand loads, and system sizes. Dynamic deployment was found to be as good as, and often better than, static deployment on emergency response metrics. When EMS systems want to meet a certain response goal, dynamic deployment may enable them to achieve that performance with fewer vehicles than static deployment. While savings in the number of vehicles translate to substantial savings in crew wages and vehicle purchasing costs, dynamic deployment may increase operating costs per vehicle because of the extra mileage involved in redeployments. Many EMS systems with average vehicle utilizations of 40% to 50% may find, however, that dynamic deployment is both cost-effective and beneficial in improving response performance. Different redeployment strategies were studied to address the added travel costs of dynamic deployment, and a min-sum assignment model was found to decrease redeployment travel the most without impacting response performance. Finally, a procedure and a mathematical model were developed to route vehicles intelligently so that demand coverage is maximized throughout the redeployment process.
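To illustrate the min-sum assignment idea on assumed toy data (the dissertation's actual model and constraints are not reproduced here), the sketch below matches idle ambulances to open posts so that total redeployment travel is minimized, using SciPy's Hungarian-algorithm solver.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical travel times (minutes) from each idle ambulance (rows)
# to each open deployment post (columns); values are made up.
travel = np.array([
    [12.0,  4.0,  9.0],
    [ 3.0, 11.0,  7.0],
    [ 8.0,  6.0,  2.0],
])

# Min-sum assignment: minimize total redeployment travel across the fleet.
rows, cols = linear_sum_assignment(travel)
for amb, post in zip(rows, cols):
    print(f"ambulance {amb} -> post {post} ({travel[amb, post]:.0f} min)")
print("total travel:", travel[rows, cols].sum(), "min")  # 9 min here
```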
26

A Theoretical Network Model and the Incremental Hypercube-Based Networks

Mao, Ai-sheng 05 1900 (has links)
The study of multicomputer interconnection networks is an important area of research in parallel processing. We introduce vertex-symmetric Hamming-group graphs as a model to design a wide variety of network topologies including the hypercube network.
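As a small illustration of one member of this family of topologies, the sketch below builds the hypercube Q_n as a Hamming graph: vertices are n-bit labels, with an edge wherever two labels differ in exactly one bit. This is a standard construction, not the thesis's Hamming-group model itself.

```python
from itertools import combinations

def hypercube_edges(n: int):
    """Q_n as a Hamming graph: an edge joins two n-bit labels
    whenever they differ in exactly one bit position."""
    verts = range(2 ** n)
    return [(u, v) for u, v in combinations(verts, 2)
            if bin(u ^ v).count("1") == 1]

print(len(hypercube_edges(3)))  # the 3-cube has 12 edges
```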
27

Simulations and applications of large-scale k-determinantal point processes

Wehbe, Diala 03 April 2019 (has links)
With the exponentially growing amount of data, sampling remains a highly relevant method for learning about populations. Sometimes a larger sample is needed to generate more precise results and to exclude the possibility of missing key information; the difficulty is that sampling a very large number of individuals can be prohibitively time-consuming. In this thesis, our aim is to build bridges between applications of statistics and the k-Determinantal Point Process (k-DPP), a conditional DPP defined through a matrix kernel that models only sets of cardinality k. We propose three complementary applications for sampling large data sets based on k-DPPs. The goal is to select diverse sets that cover a much larger set of objects in polynomial time. This can be achieved by constructing Markov chains that have the k-DPPs as their stationary distributions. The first application consists in sampling a subset of species in a phylogenetic tree while avoiding redundancy. By defining the k-DPP via an intersection kernel, the results provide a fast-mixing sampler for the k-DPP, for which a polynomial bound on the mixing time is presented that depends on the height of the phylogenetic tree. The second application shows how k-DPPs offer a powerful approach to finding a diverse subset of nodes in a large connected graph, yielding an outline of the different types of information related to the ground set. A polynomial bound on the mixing time of the proposed Markov chain is given, where the kernel used is the Moore-Penrose pseudo-inverse of the normalized Laplacian matrix; the bound holds under certain conditions on the eigenvalues of the Laplacian matrix. The third application uses the fixed-cardinality DPP in experimental design as a tool to study Latin Hypercube Sampling (LHS) of order n. The key is to propose a DPP kernel that establishes negative correlations between the selected points and preserves the constraint of the design, namely that each point occurs exactly once in each hyperplane. Then, by creating a new Markov chain that has the n-DPP as its stationary distribution, we determine the number of steps required to build an LHS according to the n-DPP.
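For intuition, here is a minimal sketch of the standard exchange-move Metropolis chain for a k-DPP: propose swapping one element of the current set for one outside it and accept with probability given by the determinant ratio. The toy RBF kernel is an assumption; the thesis's specific chains use an intersection kernel and the Laplacian pseudo-inverse instead.

```python
import numpy as np

rng = np.random.default_rng(0)

def kdpp_mcmc(L: np.ndarray, k: int, steps: int) -> list:
    """Exchange-move Metropolis chain whose stationary law is the k-DPP
    with kernel L: P(S) proportional to det(L_S) over all |S| = k."""
    n = L.shape[0]
    S = list(rng.choice(n, size=k, replace=False))
    det_S = np.linalg.det(L[np.ix_(S, S)])
    for _ in range(steps):
        i = rng.integers(k)          # position to swap out
        v = rng.integers(n)          # candidate to swap in
        if v in S:
            continue
        T = S.copy()
        T[i] = v
        det_T = np.linalg.det(L[np.ix_(T, T)])
        if rng.random() < min(1.0, det_T / det_S):  # determinant ratio
            S, det_S = T, det_T
    return sorted(S)

# Toy positive-definite RBF kernel on 8 random points in the plane;
# sample a diverse subset of size 3.
X = rng.normal(size=(8, 2))
L = np.exp(-0.5 * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
print(kdpp_mcmc(L, k=3, steps=2000))
```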
28

Embeddings in parallel systems

Kwon, Younggeun 04 May 1993 (has links)
Graduation date: 1993
29

Program allocation for hypercube based dataflow systems

Freytag, Vincent R. 18 March 1993 (has links)
The dataflow model of computation differs from the traditional control-flow model in that it does not use a program counter to sequence the instructions in a program. Instead, the execution of instructions is based solely on the availability of their operands: an instruction executes in a dataflow computer when all of its operands are available. This asynchronous nature of the dataflow model allows the exploitation of the fine-grain parallelism inherent in programs. Although the dataflow model exploits parallelism, optimally allocating a program to processors belongs to the class of NP-complete problems, so one of the major issues facing designers of dataflow multiprocessors is the proper allocation of programs to processors. The problem of program allocation lies in maximizing parallelism while minimizing interprocessor communication costs. This research culminates in a proposed method, the Balanced Layered Allocation Scheme, which uses heuristic rules to strike a balance between computation time and communication costs in dataflow multiprocessors. Specifically, the proposed allocation scheme uses Critical Path and Longest Directed Path heuristics when allocating instructions to processors. Simulation studies indicate that the proposed scheme is effective in reducing the overall execution time of a program by considering the effects of communication costs on computation times. / Graduation date: 1993
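To show what a critical-path priority looks like, here is a minimal sketch on an assumed toy dataflow graph: each instruction's level is its own cost plus the longest path through its successors, and instructions are allocated in decreasing level order. The graph, costs, and names are illustrative; the Balanced Layered Allocation Scheme also balances communication costs, which this sketch omits.

```python
from functools import lru_cache

# Hypothetical dataflow graph: instruction -> instructions that consume
# its result, with per-instruction execution costs (assumed values).
succ = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
cost = {"a": 2, "b": 3, "c": 1, "d": 2}

@lru_cache(maxsize=None)
def level(node: str) -> int:
    """Critical-path length from node to the exit: own cost plus the
    longest path through any successor."""
    return cost[node] + max((level(s) for s in succ[node]), default=0)

# Allocate instructions in decreasing critical-path priority, so nodes
# on the longest directed path ("a" -> "b" -> "d") come first.
for node in sorted(succ, key=level, reverse=True):
    print(node, level(node))
```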
30

Signal mapping designs for bit-interleaved coded modulation with iterative decoding (BICM-ID)

Tran, Nghi Huu 22 December 2004
Bit-interleaved coded modulation with iterative decoding (BICM-ID) is a spectrally efficient coded modulation technique for improving the performance of digital communication systems. It is widely known that, for a fixed signal constellation, interleaver, and error control code, the signal mapping plays an important role in determining the error performance of a BICM-ID system. This thesis concentrates on signal mapping designs for BICM-ID systems. To this end, distance criteria for finding the best mapping in terms of asymptotic performance are first derived analytically for different channel models. These criteria are then used to find good mappings for various two-dimensional 8-ary constellations. The usefulness of the proposed mappings of 8-ary constellations is verified by both the error floor bound and simulation results. Moreover, new mappings are also proposed for BICM-ID systems employing the quadrature phase shift keying (QPSK) constellation. The new mappings are obtained by considering many QPSK symbols over a multiple-symbol interval, which essentially creates hypercube constellations. Analytical and simulation results show that the proposed mappings, together with very simple convolutional codes, can offer significant coding gains over conventional BICM-ID systems for all the channel models considered. These coding gains are achieved without any bandwidth or power expansion and with only a very small increase in system complexity.
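As a rough illustration of the kind of distance criterion used to rank mappings, the sketch below scores a labelling by the harmonic mean of the squared distances between each symbol and its unique "ideal feedback" neighbour, the symbol whose label differs in exactly one bit. The harmonic-mean form and the Gray labelling of 8-PSK are assumptions for illustration; the thesis derives channel-specific criteria that this sketch does not reproduce.

```python
import numpy as np

def feedback_harmonic_distance(points: np.ndarray, labels: list, m: int) -> float:
    """Harmonic mean of squared Euclidean distances between each symbol
    and the symbol whose m-bit label flips exactly one bit (the pairing
    relevant under ideal a priori feedback). Larger is better."""
    index = {lab: i for i, lab in enumerate(labels)}  # label -> position
    inv = 0.0
    for lab, i in index.items():
        for b in range(m):
            j = index[lab ^ (1 << b)]        # flip bit b, other bits known
            inv += 1.0 / abs(points[i] - points[j]) ** 2
    return len(labels) * m / inv

# 8-PSK with an assumed Gray labelling (illustrative, not the thesis's maps).
pts = np.exp(2j * np.pi * np.arange(8) / 8)
gray = [0, 1, 3, 2, 6, 7, 5, 4]
print(feedback_harmonic_distance(pts, gray, m=3))
```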
