  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

A coprocessor for fast searching in large databases: Associative Computing Engine

Layer, Christophe, January 2007 (has links)
Ulm, Univ., Diss., 2007.
42

Simulative analysis and evaluation of the performance behavior of system-on-chip designs based on abstract SystemC models

Braun, Axel G. January 2008 (has links)
Also published: Tübingen, Univ., Diss., 2008
43

Methodology for system partitioning of chip level multiprocessor systems: an approach for data communication protocols

Brunnbauer, Winthir January 2006 (has links)
Also published: München, Techn. Univ., Diss., 2006, under the title: Brunnbauer, Winthir Anton: Methodology for system partitioning of chip level multiprocessor systems / Printed on demand
44

Microfluidic analysis system for the investigation of aqueous solutions

Siepe, Dirk. Unknown Date (has links) (PDF)
Universität Dortmund, Diss., 2003.
45

Method for the design of microfluidic components for bead-based analysis systems in medical diagnostics

Kuhn, Claus January 1900 (has links)
Also published: Stuttgart, Univ., Diss., 2005
46

System-level design methodology for embedded systems

Klaus, Stephan. January 2006 (has links)
Techn. Universität Darmstadt, Diss., 2005.
47

An Efficient Built-In Self-Diagnostic Method for Non-Traditional Faults of Embedded Memory Arrays

Arora, Vikram January 2002 (has links)
No description available.
48

The Design of the Node for the Single Chip Message Passing (SCMP) Parallel Computer

Bucciero, Mark Benjamin 18 June 2004 (has links)
Current processor designs use additional transistors to add functionality that improves performance. These features tend to exploit instruction-level parallelism. However, a point of diminishing returns has been reached in this effort. Instead, these additional transistors could be used to take advantage of thread-level parallelism (TLP). This type of parallelism focuses on hundreds of instructions, rather than single instructions, executing in parallel. Additionally, as transistor sizes shrink, the wires on a chip become thinner. Fabricating a thinner wire means increasing the resistance, and thus the latency, of that wire. In fact, in the near future, a signal may not reach a portion of the chip in a single clock cycle. So, in future designs, it will be important to limit the length of the wires on a chip. The SCMP parallel computer is a new architecture made up of small processing elements, called nodes, which are connected in a 2-D mesh with nearest-neighbor connections. Nodes communicate with one another, via message passing, through a network that uses dimension-order wormhole routing. To support TLP, each node is capable of supporting multiple threads, which execute in a non-preemptive round-robin manner. The wire lengths of this system are limited since a node is only connected to its nearest neighbors. This paper focuses on the SystemC hardware design of the node that gets replicated across the chip. The result is a node implementation that can be used to create a hardware model of the SCMP parallel computer. / Master of Science
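The dimension-order routing mentioned in the abstract always corrects the X coordinate before the Y coordinate, which makes each packet's path deterministic and deadlock-free on a 2-D mesh. A minimal sketch of that routing decision (function and coordinate names are illustrative, not taken from the SCMP design):

```python
def xy_next_hop(cur, dst):
    """Dimension-order (XY) routing on a 2-D mesh: correct the X
    coordinate first, then Y. Returns the next node, or None if
    the packet has arrived at its destination."""
    cx, cy = cur
    dx, dy = dst
    if cx != dx:                                  # route along X first
        return (cx + (1 if dx > cx else -1), cy)
    if cy != dy:                                  # then along Y
        return (cx, cy + (1 if dy > cy else -1))
    return None                                   # arrived

def route(src, dst):
    """Full hop-by-hop path from src to dst under XY routing."""
    path = [src]
    while path[-1] != dst:
        path.append(xy_next_hop(path[-1], dst))
    return path
```

For example, a packet from node (0, 0) to node (2, 1) first crosses the X dimension through (1, 0) and (2, 0) before turning into the Y dimension.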
49

Design and quality of service of mixed criticality systems in embedded architectures based on Network-on-Chip (NoC) / Dimensionnement et Qualité de Service pour les systèmes à criticité mixte dans les architectures embarquées à base de Network on Chip (NoC)

Papastefanakis, Ermis 28 November 2017 (has links)
The evolution of Systems-on-Chip (SoCs) is rapid, and the number of processors has increased, transitioning from Multi-core to Manycore platforms. In such platforms, the interconnect architecture has also shifted from traditional buses to Networks-on-Chip (NoC) in order to cope with scalability. NoCs allow the processors to exchange information with memory and peripherals during task execution and enable multiple communications in parallel. NoC-based platforms are also present in embedded systems, characterized by requirements like predictability, security and mixed criticality. In order to enable such features in existing commercial platforms, it is necessary to take into consideration the NoC, a key element with an important impact on a SoC's performance. A task exchanges information through the NoC and, as a result, its execution time depends on the transmission time of the flows it generates. By calculating the Worst Case Transmission Time (WCTT) of flows in the NoC, a step is made towards the calculation of the Worst Case Execution Time (WCET) of a task. This contributes to the overall predictability of the system. Similarly, by leveraging arbitration and traffic policies in the NoC, it is possible to provide security guarantees against compromised tasks that might try to saturate the system's resources (DoS attack). In safety-critical systems, a distinction of tasks by criticality level allows tasks of mixed criticality to co-exist and execute in harmony. In addition, it allows critical tasks to maintain their execution times at the cost of tasks of lower criticality, which will be either slowed down or stopped. 
This thesis aims to provide methods and mechanisms that contribute to the axes of predictability, security and mixed criticality in NoC-based Manycore architectures. In addition, the incentive is to jointly address the challenges in these three axes, taking into account their mutual impact. Each axis has been researched individually, but very little research takes their interdependence into consideration. This fusion of aspects is becoming increasingly intrinsic in fields like the Internet-of-Things, Cyber-Physical Systems (CPSs), and connected and autonomous vehicles, which are gaining momentum. The reason is their high degree of connectivity, which creates great exposure, as well as their increasing presence, which makes attacks severe and visible. The contributions of this thesis consist of a method to provide predictability to a set of flows in the NoC, a mechanism to provide security properties to the NoC, and a toolkit for traffic generation used for benchmarking. The first contribution is an adaptation of the trajectory approach traditionally used in avionics networks (AFDX) to calculate the WCET. In this thesis, we identify the differences and similarities in the NoC architecture and modify the trajectory approach in order to calculate the WCTT of NoC flows. The second contribution is a mechanism that detects DoS attacks and mitigates their impact on a mixed-criticality set of flows. More specifically, a monitor mechanism detects abnormal behavior and activates a mitigation mechanism. The latter applies traffic shaping at the source and restricts the rate at which the NoC is occupied. This limits the impact of the attack, guaranteeing resource availability for high-criticality tasks. Finally, NTGEN is a toolkit that can automatically generate random sets of flows that result in a predetermined NoC occupancy. These sets are then injected into the NoC, and latency-related information is collected.
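The source-side traffic shaping used by the mitigation mechanism can be illustrated with a token bucket, a standard rate-limiting scheme; the class below is a hypothetical sketch under that assumption, not the thesis's actual mechanism. A source may inject a flit only when a token is available, which bounds the long-term rate at which it can occupy the NoC while still permitting short bursts:

```python
class TokenBucket:
    """Token-bucket traffic shaper at a NoC source.

    Tokens refill at `rate` per cycle up to `burst`; each
    injected flit consumes one token, so a misbehaving source
    cannot exceed roughly `rate` injections per cycle on average.
    """

    def __init__(self, rate, burst):
        self.rate = rate          # tokens added per cycle
        self.burst = burst        # maximum accumulated tokens
        self.tokens = burst       # start with a full bucket

    def tick(self):
        """Advance one cycle: refill tokens, capped at the burst size."""
        self.tokens = min(self.burst, self.tokens + self.rate)

    def try_send(self):
        """Attempt to inject one flit; True if a token was available."""
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

Even if a compromised task attempts to inject on every cycle, its accepted injections converge to the configured rate, leaving the remaining NoC bandwidth available to high-criticality flows.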
50

Análise da influência do uso de domínios de parâmetros sobre a eficiência da verificação funcional baseada em estimulação aleatória. / Analysis of the influence of using parameter domains on random-stimulation-based functional verification efficiency.

Castro Marquez, Carlos Ivan 10 February 2009 (has links)
One of the strongest restrictions that exists throughout the IC design flow is the need for shorter development cycles. This, along with the constant demand for more functionality, has been the main cause of the appearance of so-called System-on-Chip (SoC) architectures, consisting of systems that contain dozens of reusable hardware blocks (Intellectual Properties, or IPs). The increasing complexity makes it necessary to thoroughly verify such models in order to guarantee 100% functional applications. Among current verification techniques, functional verification has received considerable attention, since it represents an alternative that keeps HDL validation costs low throughout the circuit's design cycle. Functional verification is based on testbenches, and it works by exploring the model's whole (or relevant) functionality, applying test cases in a sequential fashion. This allows the model to be tested in all desired input sequences and combinations. There are different techniques for testbench design, with random stimulation an important approach by which a huge number of test cases can be created automatically. 
In order to ease the application of stimuli in circuit simulation, it is common to limit the range of the space defined by the input parameters and to group such restricted parameters into sub-spaces. In testbench development, random stimuli generators may be created containing overlapping sub-spaces (resulting in redundant stimuli) or sub-spaces containing conditions of no interest (resulting in invalid stimuli). It is possible to eliminate, or at least reduce, redundant and invalid test cases by modifying the input stimuli space, thus diminishing the time required to complete simulation of the HDL models. In this work, the application of a technique aimed at organizing the input stimuli space by means of IP parameter domains is analyzed. A verification methodology based on it is developed, including a tool for automatic coding of random stimuli generators using SystemC: GET_PRG. Results of applying the methodology are compared to cases where test vectors are generated from the complete verification space. As expected, the number of redundant and invalid test cases applied to the testbenches was always greater for random stimulation on the whole (unreduced, unorganized) input stimuli space, with a larger testbench execution time.
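The idea of drawing random stimuli only from declared parameter domains, while filtering out invalid combinations before they reach the design under test, can be sketched as follows. The parameter names and the constraint are hypothetical, and this is not the GET_PRG implementation:

```python
import random

def make_generator(domains, invalid):
    """Build a random stimulus generator restricted to parameter domains.

    `domains` maps each parameter to its legal values; `invalid` is a
    list of predicates marking combinations of no interest, which are
    rejected so no simulation time is wasted on invalid stimuli.
    """
    def gen(rng):
        while True:
            stim = {p: rng.choice(vals) for p, vals in domains.items()}
            if not any(pred(stim) for pred in invalid):
                return stim
    return gen

# Hypothetical example domains and constraint for a bus-interface IP.
domains = {"width": [8, 16, 32], "mode": ["read", "write"]}
invalid = [lambda s: s["width"] == 8 and s["mode"] == "write"]

rng = random.Random(0)            # seeded for reproducible runs
gen = make_generator(domains, invalid)
stimuli = [gen(rng) for _ in range(100)]
```

Because rejection happens at generation time, every stimulus handed to the testbench lies inside the organized input space, which is the effect the methodology above measures against unrestricted random generation.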
