171

The evolution of the regulation of recall under the Código de Defesa do Consumidor (Brazilian Consumer Protection Code)

Sanchez, Andrea da Silva Souza 24 March 2015 (has links)
Brazilian society, observing the growing number of recall campaigns, has at times associated this growth with an increase in the introduction into the consumer market of products and services with a high degree of harmfulness and danger to consumers' health and safety. For a better understanding of the relevance of the matter, this work begins with a brief historical review of the consumer's right to health and safety, established as a social and fundamental right in the country's legal order, and then identifies the actors responsible for the regulation, monitoring, and oversight of recalls. The scope is limited to understanding, conceptualizing, and analyzing the evolution of the regulation of recall campaigns under the Consumer Protection Code, with emphasis on the suppliers' duty to inform set out in articles 8 to 10 of Law no. 8.078/90, which deal with the protection of consumers' health and safety. To show how recent the country's public policies for protecting consumer health and safety are, and how their improvement could allow an even greater growth in the number of campaigns, the work lists the rules and actions responsible for the development of the institute, while recognizing how much remains to be done, especially when the evolution of government action in Brazil is compared with that of countries that invest in high technology and scientific research and thereby guarantee consumers a higher degree of protection.
172

High-Level Synthesis Framework for Crosstalk Minimization in VLSI ASICs

Sankaran, Hariharan 31 October 2008 (has links)
Capacitive crosstalk noise can affect the delay of a switching signal or induce a glitch on a static signal, causing timing violations or chip failure. Crosstalk noise depends on coupling parasitics, driver strength, signal timing characteristics, and signal transition patterns. Layout-level crosstalk analysis techniques are generally pessimistic and computationally expensive for large designs due to the lack of design flexibility at the lower levels of the design hierarchy. Architectural decisions such as the type of interconnect architecture, the number of storage and execution units, the network of communicating units, and the data bus width have a major impact on design attributes such as area, speed, power, and noise. To address these concerns, we propose a high-level synthesis framework to optimize for worst-case crosstalk patterns on coupled nets, a floorplan-driven high-level synthesis framework to minimize coupling capacitance, and an on-chip technique to dynamically detect and eliminate worst-case crosstalk patterns in bus-based macro-cell designs. Due to the Miller coupling effect, the switching activity pattern on adjacent nets may increase the effective capacitance seen by a victim net, thereby causing a worst-case signal delay on that net. However, signal activity patterns on coupled nets depend on data correlations, which in turn depend on resource sharing; resource sharing, in turn, is determined by scheduling, allocation, and binding during the high-level synthesis flow. Therefore, we propose a Simulated Annealing (SA) based design space exploration of the HLS, bus line re-ordering, and encoding subspaces to optimize for worst-case crosstalk patterns in bus-based macro-cell designs. We demonstrate that the proposed framework aids layout-level techniques in eliminating false-positive violations. We also propose an SA-based algorithm that explores the floorplan and HLS subspaces to optimize coupling capacitances in bus-based macro-cell designs; an RTL floorplanner is integrated into the HLS flow to estimate coupling capacitances between bus lines. Crosstalk analysis using Cadence Celtic shows that designs generated by the proposed framework result in fewer crosstalk violations than designs generated through a traditional ASIC design flow. Finally, we propose an on-chip crosstalk detection and elimination technique that dynamically detects and eliminates worst-case crosstalk patterns with minimal area penalty compared to other layout-level techniques reported in the literature.
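
The abstract does not spell out the SA moves; as a minimal, hedged sketch of one of the subspaces it names (bus line re-ordering), the Python below anneals a bus-line permutation to reduce opposite-direction switching on adjacent lines, a crude proxy for the worst-case Miller coupling pattern. The cost model and the signal traces are invented for illustration and are not from the thesis.

    import math
    import random

    def coupling_cost(order, transitions):
        """Count opposite-phase transitions on physically adjacent lines,
        a simple proxy for worst-case Miller coupling."""
        cost = 0
        for a, b in zip(order, order[1:]):
            for ta, tb in zip(transitions[a], transitions[b]):
                if ta * tb < 0:  # opposite-direction switching event
                    cost += 1
        return cost

    def anneal_bus_order(transitions, steps=20000, t0=5.0, alpha=0.9995):
        """Simulated annealing over bus-line permutations."""
        n = len(transitions)
        order = list(range(n))
        best, temp = order[:], t0
        cur = best_cost = coupling_cost(order, transitions)
        for _ in range(steps):
            i, j = random.sample(range(n), 2)
            order[i], order[j] = order[j], order[i]  # swap two lines
            new = coupling_cost(order, transitions)
            if new < cur or random.random() < math.exp((cur - new) / temp):
                cur = new
                if new < best_cost:
                    best_cost, best = new, order[:]
            else:
                order[i], order[j] = order[j], order[i]  # undo swap
            temp *= alpha
        return best, best_cost

    # Invented 8-bit bus trace: +1 rising, -1 falling, 0 quiet per cycle.
    random.seed(1)
    trace = [[random.choice((-1, 0, 1)) for _ in range(64)] for _ in range(8)]
    order, cost = anneal_bus_order(trace)
    print("best ordering:", order, "opposite transitions:", cost)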
173

High-Level Parallel Programming of Computation-Intensive Algorithms on Fine-Grained Architecture

Cheema, Fahad Islam January 2009 (has links)
Computation-intensive algorithms require a high level of parallelism and programmability, which makes them good candidates for hardware acceleration on fine-grained processor arrays. Using a Hardware Description Language (HDL), it is very difficult to design and manage fine-grained processing units, so a High-Level Language (HLL) is a preferred alternative. This thesis analyzes HLL programming of fine-grained architectures in terms of achieved performance and resource consumption. In a case study, highly computation-intensive algorithms (interpolation kernels) are implemented on a fine-grained architecture (an FPGA) using a high-level language (Mitrion-C). The Mitrion Virtual Processor (MVP) is extracted as an application-specific fine-grained processor array, and the Mitrion development environment translates the high-level design to a hardware description (HDL). Performance requirements, parallelism possibilities and limitations, and the resource cost of parallelism vary from algorithm to algorithm as well as with the hardware platform. By considering parallelism at different levels, we can adjust the parallelism to the available hardware resources and achieve a better balance of tradeoffs such as gates versus performance and memory versus performance. This thesis proposes design approaches for adjusting parallelism at different design levels. For the interpolation kernels, different parallelism levels and design variants are proposed, which can be mixed to obtain a well-tuned, application- and resource-specific design.
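
Mitrion-C code is not reproduced in the abstract; as a language-neutral sketch of the operation-level parallelism adjustment it describes, the Python below writes the same 4-tap interpolation kernel in an operation-serial form (one shared multiply-accumulate) and an unrolled form (four parallel multipliers plus an adder tree), the gates-versus-performance tradeoff in miniature. The filter coefficients are a standard half-pel kernel chosen for illustration.

    import numpy as np

    coeffs = np.array([-0.0625, 0.5625, 0.5625, -0.0625])  # 4-tap half-pel filter

    def interp_sequential(x):
        """Operation-serial form: maps to one shared MAC unit in hardware
        (4 cycles per output, minimal gate count)."""
        y = np.zeros(len(x) - 3)
        for i in range(len(y)):
            acc = 0.0
            for k in range(4):
                acc += coeffs[k] * x[i + k]
            y[i] = acc
        return y

    def interp_unrolled(x):
        """Operation-parallel form: four parallel multipliers feeding a
        two-level adder tree (one output per cycle at 4x the multiplier cost)."""
        p = [coeffs[k] * x[k:len(x) - 3 + k] for k in range(4)]  # 4 parallel muls
        return (p[0] + p[1]) + (p[2] + p[3])                      # adder tree

    x = np.arange(16, dtype=float)
    assert np.allclose(interp_sequential(x), interp_unrolled(x))
    print(interp_unrolled(x))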
174

Exploration of the architecture space for image processing systems, with an analysis of fundamental building blocks of the digital retina

Corvino, Rosilde 14 October 2009 (has links) (PDF)
In the context of high-level synthesis (HLS), which extracts a structural model from an algorithmic model, we propose solutions for optimizing data access and transfer on the target hardware. A methodology for exploring the space of possible memory architectures was developed; it finds a compromise between the amount of internal memory used and the timing performance of the generated hardware. Two levels of optimization exist: 1. an architectural optimization, which consists of creating a memory hierarchy; 2. an algorithmic optimization, which consists of partitioning the full set of manipulated data so that only the data needed immediately is stored internally. For each possible partitioning, we solve the problem of scheduling the computations and mapping the data. Finally, we select the Pareto-optimal solution or solutions. We propose a tool, a front end to HLS, that can apply the algorithmic optimization of point 2 to an image processing algorithm specified by the user. The tool outputs an algorithmic model optimized for HLS by customizing a generic architecture.
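
The exploration loop itself is not given in the abstract; as a minimal illustration of its final step, keeping only the Pareto solutions over internal memory versus timing, here is a hedged Python sketch. The candidate design points are invented.

    def pareto_front(candidates):
        """Keep candidates not dominated in (internal_memory, latency);
        smaller is better on both axes."""
        front = []
        for mem, lat, name in candidates:
            dominated = any(m <= mem and l <= lat and (m < mem or l < lat)
                            for m, l, _ in candidates)
            if not dominated:
                front.append((mem, lat, name))
        return sorted(front)

    # Invented (internal memory in KB, latency in cycles) design points,
    # e.g. different data partitionings / memory hierarchies.
    points = [(64, 900, "full buffering"), (16, 2400, "line buffer"),
              (32, 1200, "tile 16x16"), (32, 1500, "tile 8x32"),
              (8, 5000, "pixel stream")]
    for mem, lat, name in pareto_front(points):
        print(f"{name}: {mem} KB, {lat} cycles")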
175

PDEModelica – A High-Level Language for Modeling with Partial Differential Equations

Saldamli, Levon January 2006 (has links)
This thesis describes work on a new high-level mathematical modeling language and framework called PDEModelica for modeling with partial differential equations. It is an extension to the current Modelica modeling language for object-oriented, equation-based modeling based on differential and algebraic equations. The language extensions and the framework presented in this thesis are consistent with the concepts of Modelica while adding support for partial differential equations and space-distributed variables called fields. The specification of a partial differential equation problem consists of three parts: 1) the description of the definition domain, i.e., the geometric region where the equations are defined, 2) the initial and boundary conditions, and 3) the actual equations. The known and unknown distributed variables in the equation are represented by field variables in PDEModelica. Domains are defined by a geometric description of their boundaries. Equations may use the Modelica derivative operator extended with support for partial derivatives, or vector differential operators such as divergence and gradient, which can be defined for general curvilinear coordinates based on coordinate system definitions. The PDEModelica system also allows the partial differential equation models to be defined using a coefficient-based approach, where PDE models from a library are instantiated with different parameter values. Such a library contains both continuous and discrete representations of the PDE model. The user can instantiate the continuous parts and define the parameters, and the discrete parts containing the equations are automatically instantiated and used to solve the PDE problem numerically. Compared to most earlier work in the area of mathematical modeling languages supporting PDEs, this work provides a modern object-oriented component-based approach to modeling with PDEs, including general support for hierarchical modeling, and for general, complex geometries. It is possible to separate the geometry definition from the model definition, which allows geometries to be defined separately, collected into libraries, and reused in new models. It is also possible to separate the analytical continuous model description from the chosen discretization and numerical solution methods. This allows the model description to be reused, independent of different numerical solution approaches. The PDEModelica field concept allows general declaration of spatially distributed variables. Compared to most other approaches, the field concept described in this work affords a clearer abstraction and defines a new type of variable. Arrays of such field variables can be defined in the same way as arrays of regular, scalar variables. The PDEModelica language supports a clear, mathematical syntax that can be used both for equations referring to fields and explicit domain specifications, used for example to specify boundary conditions. Hierarchical modeling and decomposition is integrated with a general connection concept, which allows connections between ODE/DAE and PDE based models. The implementation of a Modelica library needed for PDEModelica and a prototype implementation of field variables are also described in the thesis. The PDEModelica library contains internal and external solver implementations, and uses external software for mesh generation, requisite for numerical solution of the PDEs. Finally, some examples modeled with PDEModelica and solved using these implementations are presented.
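
PDEModelica syntax is not reproduced in the abstract; as a language-neutral sketch of the separation it emphasizes between the continuous model description and the chosen discretization, the Python below states a 1D heat equation (domain, boundary and initial conditions, equation coefficient) as data, and a separate, interchangeable routine supplies one finite-difference discretization. All names and the problem itself are illustrative assumptions.

    import numpy as np

    # Continuous problem statement, kept separate from any discretization:
    # u_t = alpha * u_xx on (0, 1), u(0) = u(1) = 0, u(x, 0) = sin(pi x).
    problem = {
        "domain": (0.0, 1.0),
        "alpha": 0.1,
        "boundary": (0.0, 0.0),            # Dirichlet values at both ends
        "initial": lambda x: np.sin(np.pi * x),
    }

    def solve_explicit_fd(p, nx=51, dt=5e-5, t_end=0.05):
        """One interchangeable discretization: explicit central differences.
        A different solver could consume the same continuous description."""
        x = np.linspace(*p["domain"], nx)
        dx = x[1] - x[0]
        u = p["initial"](x)
        u[0], u[-1] = p["boundary"]
        for _ in range(int(t_end / dt)):
            u[1:-1] += p["alpha"] * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
        return x, u

    x, u = solve_explicit_fd(problem)
    print("peak value after 0.05 s:", u.max())  # decays from 1.0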
176

Platform Independent Real-Time X3D Shaders and their Applications in Bioinformatics Visualization

Liu, Feng 12 January 2007 (has links)
Since the introduction of programmable Graphics Processing Units (GPUs) and procedural shaders, hardware vendors have each developed their own real-time shading language standard, and none of these shading languages is fully platform independent. Although real-time programmable shader technology can be used to build 3D applications on a single system, this platform dependence keeps shader technology out of 3D Internet applications. The primary purpose of this dissertation is to design a framework for translating different shader formats into platform-independent shaders and embedding them in eXtensible 3D (X3D) scenes for 3D web applications. The framework includes a back-end core shader converter, which translates shaders among different shading languages through an intermediate XML layer, and a shader library containing a basic set of shaders that developers can load from and add to. The framework is then applied to applications in biomolecular visualization.
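
The dissertation's converter and its XML schema are not given in the abstract; the sketch below only illustrates the general shape of a middle XML layer, parsing a hypothetical shader description and emitting one target syntax. The element names, the sample shader, and the GLSL-flavoured output are all invented.

    import xml.etree.ElementTree as ET

    # Hypothetical intermediate shader description (invented schema).
    SHADER_XML = """
    <shader name="diffuse" stage="fragment">
      <uniform type="vec3" name="lightDir"/>
      <uniform type="vec4" name="baseColor"/>
      <varying type="vec3" name="normal"/>
      <body>gl_FragColor = baseColor * max(dot(normal, lightDir), 0.0);</body>
    </shader>
    """

    def to_glsl(xml_text):
        """Emit a GLSL-flavoured fragment shader from the XML middle layer.
        A second back end could emit another dialect from the same tree."""
        root = ET.fromstring(xml_text)
        lines = [f"// {root.get('stage')} shader: {root.get('name')}"]
        for u in root.findall("uniform"):
            lines.append(f"uniform {u.get('type')} {u.get('name')};")
        for v in root.findall("varying"):
            lines.append(f"varying {v.get('type')} {v.get('name')};")
        lines += ["void main() {", "    " + root.findtext("body").strip(), "}"]
        return "\n".join(lines)

    print(to_glsl(SHADER_XML))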
178

Segmentation and structuring of video documents for indexing applications

Tapu, Ruxandra Georgina 07 December 2012 (has links) (PDF)
Recent advances in telecommunications, together with the development of image and video acquisition and processing devices, have led to spectacular growth in the amount of visual content stored, transmitted, and exchanged over the Internet. In this context, developing efficient tools to access, browse, and retrieve video content has become a crucial challenge. In Chapter 2 we introduce and validate a novel shot boundary detection algorithm able to identify abrupt and gradual transitions. The technique is based on an enhanced graph partition model, combined with multi-resolution analysis and a non-linear filtering operation. The global computational complexity is reduced by implementing a two-pass strategy. In Chapter 3 the video abstraction problem is considered; we have developed a keyframe representation system that extracts a variable number of images from each detected shot, depending on the variation of the visual content. Chapter 4 deals with the issue of high-level semantic segmentation into scenes. Here, a novel scene/DVD chapter detection method is introduced and validated: spatio-temporally coherent shots are clustered into the same scene based on a set of temporal constraints, adaptive thresholds, and neutralized shots. Chapter 5 considers the issue of object detection and segmentation. We introduce a novel spatio-temporal visual saliency system based on region contrast, interest point correspondence, geometric transforms, motion class estimation, and the temporal consistency of regions. The proposed technique is extended to 3D videos by representing the stereoscopic perception as a 2D video and its associated depth.
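
The thesis's detector is an enhanced graph-partition model with multi-resolution analysis; as a much simpler stand-in that shows the shape of the task, the hedged sketch below flags abrupt cuts from frame-to-frame histogram distance with an adaptive threshold. It is not the authors' algorithm, and the synthetic clip is invented.

    import numpy as np

    def shot_boundaries(hists, k=3.0):
        """Flag abrupt cuts where the histogram distance between consecutive
        frames exceeds mean + k * std of all distances (adaptive threshold).
        `hists` is an array of per-frame normalized color histograms."""
        d = np.abs(np.diff(hists, axis=0)).sum(axis=1)  # L1 distance per pair
        thresh = d.mean() + k * d.std()
        return [i + 1 for i, v in enumerate(d) if v > thresh]

    # Synthetic clip: two visually stable shots with a hard cut at frame 50.
    rng = np.random.default_rng(0)
    shot_a, shot_b = rng.dirichlet(np.ones(32)), rng.dirichlet(np.ones(32))
    frames = np.vstack([s + rng.normal(0, 0.002, 32)
                        for s in [shot_a] * 50 + [shot_b] * 50])
    print(shot_boundaries(frames))  # expected: [50]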
179

Designing application-specific processors for image processing : a thesis presented in partial fulfilment of the requirements for the degree of Master of Science in Computer Science, Massey University, Palmerston North, New Zealand

Bishell, Aaron January 2008 (has links)
Implementing a real-time image-processing algorithm on a serial processor is difficult because such a processor cannot cope with the volume of data in the low-level operations. However, a parallel implementation, required to meet the timing constraints of the low-level operations, results in low resource utilisation when implementing the high-level operations. These factors suggested a combination of parallel hardware for the low-level operations and a serial processor for the high-level operations when implementing a high-level image-processing algorithm. Several types of serial processor were available. A general-purpose processor requires an extensive instruction set to be able to execute any arbitrary algorithm, resulting in a relatively complex instruction decoder and possibly extra functional units (FUs). An application-specific processor, which was considered in this research, implements just enough FUs to execute a given algorithm and implements a simpler, more efficient instruction decoder. In addition, an algorithm's behaviour on a processor can be represented either in hardware (i.e. hardwired logic), which limits the ability to modify the processor's algorithm behaviour, or in "software" (i.e. programmable logic), which enables external sources to specify the algorithm behaviour. This research investigated hardware- and software-controlled application-specific serial processors for the implementation of high-level image-processing algorithms and compared these against parallel hardware and general-purpose serial processors. It was found that application-specific processors easily meet the timing constraints imposed by real-time high-level image processing. In addition, the software-controlled processors offered additional flexibility, performance penalties of 9.9% and 36.9%, and inconclusive footprint savings (and costs) when compared with hardware-controlled processors.
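
The processor designs themselves are not specified in the abstract; to make the hardware- versus software-controlled distinction concrete, here is a hedged Python sketch of the software-controlled idea: a tiny fetch-decode-execute loop whose behaviour comes from an externally supplied program rather than hardwired logic. The three-instruction set is invented.

    def run(program, data):
        """Minimal software-controlled processor model: behaviour comes from
        `program` (supplied externally), not from hardwired control logic.
        Invented ISA: ('load', addr), ('addi', imm), ('store', addr)."""
        acc, mem = 0, list(data)
        for op, arg in program:              # fetch
            if op == "load":                 # decode + execute
                acc = mem[arg]
            elif op == "addi":
                acc += arg
            elif op == "store":
                mem[arg] = acc
            else:
                raise ValueError(f"unknown opcode {op}")
        return mem

    # Brighten one pixel by 16: the same datapath runs any program we feed it.
    print(run([("load", 0), ("addi", 16), ("store", 0)], [100, 42]))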
180

Machine learning and dynamic programming algorithms for motion planning and control

Arslan, Oktay 07 January 2016 (has links)
Robot motion planning is one of the central problems in robotics and has received a considerable amount of attention not only from roboticists but also from the control and artificial intelligence (AI) communities. Despite the different types of applications and physical properties of robotic systems, many high-level tasks of autonomous systems can be decomposed into subtasks that require point-to-point navigation while avoiding infeasible regions due to obstacles in the workspace. This dissertation aims at developing a new class of sampling-based motion planning algorithms that are fast, efficient, and asymptotically optimal by employing ideas from Machine Learning (ML) and Dynamic Programming (DP). First, we interpret the robot motion planning problem as a form of machine learning problem, since the underlying search space is not known a priori, and utilize random geometric graphs to compute consistent discretizations of the underlying continuous search space. Then, we integrate existing DP and ML algorithms into the framework of sampling-based algorithms for better exploitation and exploration, respectively. We introduce a novel sampling-based algorithm, called RRT#, that improves upon the well-known RRT* algorithm by leveraging value and policy iteration methods as new information is collected. The proposed algorithms yield provable guarantees on correctness, completeness, and asymptotic optimality. We also develop an adaptive sampling strategy by treating exploration as a classification (or regression) problem, and use online machine learning algorithms to learn the relevant region of a query, i.e., the region that contains the optimal solution, without significant computational overhead. We then extend the application of sampling-based algorithms to a class of stochastic optimal control problems and to problems with differential constraints. Specifically, we introduce the Path Integral - RRT algorithm for solving the optimal control of stochastic systems, and the CL-RRT# algorithm, which uses closed-loop prediction for trajectory generation in differential systems. One of the key benefits of CL-RRT# is that, for many systems, given a low-level tracking controller, it is easier to handle differential constraints, so complex steering procedures are not needed, unlike in most existing kinodynamic sampling-based algorithms. Implementation results of sampling-based planners for route planning of a full-scale autonomous helicopter under the Autonomous Aerial Cargo/Utility System (AACUS) program are provided.
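
RRT# couples RRT*'s incremental graph growth with value and policy iteration; as a compact stand-in, the hedged sketch below implements plain RRT in a 2D world with a single circular obstacle, enough to show the sampling-based skeleton the dissertation builds on, while omitting the rewiring and value-iteration machinery that distinguish RRT* and RRT#. All parameters are illustrative.

    import math
    import random

    def rrt(start, goal, obstacle, step=0.5, iters=2000, goal_tol=0.5):
        """Plain RRT in a 10x10 plane: sample, extend the nearest vertex by
        `step` toward the sample, keep nodes outside the obstacle, and stop
        once a node lands inside the goal region."""
        ox, oy, orad = obstacle
        nodes, parent = [start], {0: None}
        random.seed(2)
        for _ in range(iters):
            # Goal-biased sampling: aim at the goal 5% of the time.
            sx, sy = goal if random.random() < 0.05 else \
                     (random.uniform(0, 10), random.uniform(0, 10))
            i = min(range(len(nodes)),
                    key=lambda k: math.dist(nodes[k], (sx, sy)))
            nx, ny = nodes[i]
            d = math.dist((nx, ny), (sx, sy)) or 1e-9
            new = (nx + step * (sx - nx) / d, ny + step * (sy - ny) / d)
            if math.dist(new, (ox, oy)) <= orad:   # inside obstacle, reject
                continue
            parent[len(nodes)] = i
            nodes.append(new)
            if math.dist(new, goal) < goal_tol:    # reached the goal region
                path, k = [], len(nodes) - 1
                while k is not None:
                    path.append(nodes[k])
                    k = parent[k]
                return path[::-1]
        return None

    path = rrt((1, 1), (9, 9), obstacle=(5, 5, 2))
    print(f"path with {len(path)} waypoints" if path else "no path found")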
