  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
151

Mechanisms for reliable message delivery in pipelined interconnection networks

Dao, Binh Vien 05 1900 (has links)
No description available.
152

Elements of an applications-driven optical interconnect technology modeling framework for ultracompact massively parallel processing systems

Cruz-Rivera, Jose L. 05 1900 (has links)
No description available.
153

A unified approach to optimal multiprocessor implementations from non-parallel algorithm specifications

Lee, Sae Hun 12 1900 (has links)
No description available.
154

Parallel processing approach for crash dynamic analysis

Chiang, K. (Kuoning) 08 1900 (has links)
No description available.
155

Interactive parallel simulation environments

Hybinette, Maria 05 1900 (has links)
No description available.
156

Parallel parsing of context-free languages on an array of processors

Langlois, Laurent Chevalier January 1988 (has links)
Kosaraju [Kosaraju 69] and, independently ten years later, Guibas, Kung and Thompson [Guibas 79] devised an algorithm (K-GKT) for solving on an array of processors a class of dynamic programming problems of which general context-free language (CFL) recognition is a member. I introduce an extension to K-GKT which allows parsing as well as recognition. The basic idea of the extension is to add counters to the processors; these act as pointers to other processors. The extended algorithm consists of three phases, which I call the recognition phase, the marking phase and the parse output phase.

I first consider the case of unambiguous grammars. I show that in that case the algorithm has O(n² log n) space complexity and linear time complexity. To obtain these results I rely on a counter implementation that allows each of the operations set to zero, test if zero, increment by 1 and decrement by 1 to execute in constant time. I provide a proof of correctness of this implementation.

I introduce the concept of efficient grammars. One factor in the multiplicative constant hidden behind the O(n² log n) space complexity measure is related to the number of non-terminals in the (unambiguous) grammar used. I say that a grammar is k-efficient if it allows the processors to store no more than k pointer pairs, and I call a 1-efficient grammar an efficient grammar. I show that two properties, which I call nt-disjunction and rhs-disjunction, together with unambiguity are sufficient but not necessary conditions for grammar efficiency. I also show that unambiguity itself is not a necessary condition for efficiency.

I then consider the case of ambiguous grammars. I present two methods for outputting multiple parses; both output each parse in linear time. One method has O(n³ log n) space complexity while the other has O(n² log n) space complexity.

I then address the issue of problem decomposition: I show how part of my extension can be adapted, using a standard technique, to process inputs that would be too large for an array of some fixed size. I then discuss briefly some issues related to implementation and report on an actual implementation on the I.C.L. DAP. Finally, I show how another systolic CFL parsing algorithm, by Chang, Ibarra and Palis [Chang 87], can be generalized to output parses in preorder and inorder.
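As a compact point of reference for this abstract, the following is a minimal sequential sketch of the dynamic programming that K-GKT parallelizes: CYK-style recognition for a grammar in Chomsky normal form, extended with back-pointers — a sequential analogue of the counters the abstract adds to the processors — so a parse can be recovered rather than only a yes/no answer. The grammar and input string are invented for illustration; nothing of the systolic, array-of-processors formulation is reproduced here.

```python
# Sequential CYK recognition with back-pointers for parse extraction.
# Grammar in CNF: S -> AB | BC, A -> BA | 'a', B -> CC | 'b', C -> AB | 'a'
BINARY = {                                       # X -> Y Z rules
    "S": [("A", "B"), ("B", "C")],
    "A": [("B", "A")],
    "B": [("C", "C")],
    "C": [("A", "B")],
}
TERMINAL = {"A": {"a"}, "B": {"b"}, "C": {"a"}}  # X -> 'x' rules

def cyk_parse(word, start="S"):
    n = len(word)
    # table[i][j] maps each non-terminal deriving word[i:j+1] to a back-pointer
    table = [[{} for _ in range(n)] for _ in range(n)]
    for i, ch in enumerate(word):
        for nt, terms in TERMINAL.items():
            if ch in terms:
                table[i][i][nt] = ch             # leaf back-pointer
    for span in range(2, n + 1):                 # substring length
        for i in range(n - span + 1):
            j = i + span - 1
            for k in range(i, j):                # split point
                for x, rules in BINARY.items():
                    for y, z in rules:
                        if y in table[i][k] and z in table[k + 1][j]:
                            # keep one back-pointer; retaining all of them
                            # is what the ambiguous-grammar case requires
                            table[i][j].setdefault(x, (k, y, z))
    if start not in table[0][n - 1]:
        return None                              # not in the language

    def build(i, j, nt):                         # follow back-pointers
        bp = table[i][j][nt]
        if isinstance(bp, str):
            return (nt, bp)
        k, y, z = bp
        return (nt, build(i, k, y), build(k + 1, j, z))

    return build(0, n - 1, start)

print(cyk_parse("baaba"))  # a nested parse tree, or None
```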
157

Parallelization of the Hartley transform

Liu, Mingjun January 1992 (has links)
No description available.
158

Neural net models of word representation : a connectionist approach to word meaning and lexical relations

Neff, Kathryn Joan Eggers January 1991 (has links)
This study examines the use of the neural net paradigm as a modeling tool to represent word meanings. The neural net paradigm, also called "connectionism" and "parallel distributed processing," provides a new metaphor and vocabulary for representing the structure of the mental lexicon. As a research method applied to the componential analysis of word meanings, the neural net approach has one primary advantage over the traditional introspective method: freedom from the investigator's personal biases.

The connectionist method is illustrated in this thesis with an extensive examination of the meanings of the words "cup" and "mug." These words have been studied previously by Labov (1973), Wierzbicka (1985), Andersen (1975), and Kempton (1978), using very different methods.

The neural net models developed in this study are based on empirical data acquired through interviews with nine informants who classified 37 objects, 37 photographs, and 37 line drawings as "cups," "mugs," or "neither." These responses were combined with a data file representing the coded attributes of each object, to construct neural net models which reflect each informant's classification process.

In the neural net models, the "cup" and "mug" features are interconnected with positive and negative weights that represent the association strengths of the features. When the connection weights are set so that they reflect the informants' responses, the neural net models can account for the extreme discrepancies in object-naming among informants, and the models can also account for the inconsistent classifications of each individual informant with respect to the mode of presentation (drawing, photograph, or actual object). Further, the neural net models can predict classifications for novel objects with an accuracy varying from 82% to 100%.

By examining the connection weight patterns within the neural net model, it is possible to discover the "cup" and "mug" features which are most salient for each informant, and for the informants collectively. This analysis shows that each informant has acquired internal meanings for the words "cup" and "mug" which are unique to the individual, although there is considerable overlap with respect to the most salient features. / Department of English
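A toy sketch may make the architecture this abstract describes concrete: object features are linked to the word nodes "cup" and "mug" by positive and negative association weights, and the word with the higher net activation labels the object, falling back to "neither" below a threshold. The feature names, weight values, and threshold below are invented for illustration; the thesis derives its weights from informant data.

```python
# Toy weighted-feature word classifier: each object is a set of binary
# features; each word node accumulates association weights over the
# features present, and the stronger net activation wins.
FEATURES = ["has_handle", "tapered", "ceramic", "tall", "thick_walled"]

# Illustrative association weights between features and word nodes.
WEIGHTS = {
    "cup": {"has_handle": 0.6, "tapered": 0.8, "ceramic": 0.3,
            "tall": -0.5, "thick_walled": -0.4},
    "mug": {"has_handle": 0.7, "tapered": -0.6, "ceramic": 0.4,
            "tall": 0.6, "thick_walled": 0.8},
}

def classify(obj, threshold=0.2):
    """Return 'cup', 'mug', or 'neither' for a set of present features."""
    activation = {word: sum(w.get(f, 0.0) for f in obj)
                  for word, w in WEIGHTS.items()}
    word, score = max(activation.items(), key=lambda kv: kv[1])
    return word if score >= threshold else "neither"

print(classify({"has_handle", "tall", "thick_walled", "ceramic"}))  # mug
print(classify({"has_handle", "tapered", "ceramic"}))               # cup
```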
159

Una metodología de detección de fallos transitorios en aplicaciones paralelas sobre cluster de multicores [A methodology for detecting transient faults in parallel applications on multicore clusters]

Montezanti, Diego Miguel January 2014 (has links)
The increase in integration scale, aimed at improving the performance of current processors, together with the growth of computing systems, has made reliability a significant concern. In particular, the growing vulnerability to transient faults has become critical, because of the capacity of these faults to corrupt application results. Historically, transient faults have been a concern in the design of critical systems, such as flight systems or high-availability servers, where the consequences of a fault can be disastrous. Although they are temporary, these faults can alter the behavior of the computing system; since 2000, reports of significant malfunctions in various supercomputers caused by transient faults have become more frequent.

The impact of transient faults is especially relevant in the context of High Performance Computing (HPC). Even though the mean time between failures (MTBF) is on the order of 2 years for a commodity processor, in a supercomputer with hundreds or thousands of processors cooperating on a task, the MTBF decreases as the number of processors grows. This situation is aggravated by the advent of multicore processors and multicore cluster architectures, which incorporate a high degree of hardware-level parallelism. The incidence of transient faults is even greater for long-running applications that handle large volumes of data, given the high cost (in time and resource utilization) of relaunching the execution from the beginning when incorrect results are produced by a fault.

These factors justify the need for specific strategies to improve reliability in HPC systems. In this regard, it is crucial to detect so-called silent faults, which alter application results but are not intercepted by the operating system or any other software layer, and therefore do not cause the execution to terminate abruptly. In this context, this work analyzes a software-based distributed methodology, designed for message-passing parallel scientific applications, that detects transient faults by validating the contents of the messages about to be sent to another process of the application. This previously published methodology addresses a problem not covered by existing proposals: it detects transient faults that allow execution to continue but can corrupt the final results, improving system reliability and reducing the time after which the application can be relaunched, which is especially useful in long executions.
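A minimal sketch of the general idea, in the spirit of the methodology described (not its exact protocol): the computation that produces an outgoing message is executed twice, and the replicas' contents are compared before the message reaches the communication layer, so a silent fault in one replica is caught instead of propagating to another process. The compute/send interfaces, checksum choice, and retry policy below are assumptions for illustration.

```python
# Detecting silent transient faults by validating outgoing message
# contents: run the producing computation twice and send only on match.
import hashlib
import pickle

def checksum(payload):
    """Content hash of an arbitrary picklable message payload."""
    return hashlib.sha256(pickle.dumps(payload)).hexdigest()

def validated_send(compute, transport_send, max_retries=1):
    """Run `compute` twice; hand the result to `transport_send` only if
    both replicas agree. A mismatch signals a transient fault in one
    replica; we recompute up to `max_retries` times before reporting."""
    for _ in range(max_retries + 1):
        first, second = compute(), compute()
        if checksum(first) == checksum(second):
            transport_send(first)       # contents validated: safe to send
            return True
    return False                        # persistent mismatch: flag a fault

# Illustrative use with a stand-in for an MPI-style send:
outbox = []
ok = validated_send(lambda: [x * x for x in range(5)], outbox.append)
print(ok, outbox)                       # True [[0, 1, 4, 9, 16]]
```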
160

The modelling of temporal properties in a process algebra framework

Cowie, Alexander James Unknown Date (has links)
Thesis (PhD)--University of South Australia, 1999
