  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
241

Flot de conception pour l'ultra faible consommation : échantillonnage non-uniforme et électronique asynchrone / Design flow for ultra-low power: non-uniform sampling and asynchronous circuits

Simatic, Jean 07 December 2017 (has links)
Les systèmes intégrés sont souvent des systèmes hétérogènes avec des contraintes fortes de consommation électrique. Ils embarquent aujourd'hui des actionneurs, des capteurs et des unités pour le traitement du signal. Afin de limiter l'énergie consommée, ils peuvent tirer profit des techniques évènementielles que sont l'échantillonnage non uniforme et l'électronique asynchrone. En effet, elles permettent de réduire drastiquement la quantité de données échantillonnées pour de nombreuses classes de signaux et de diminuer l'activité. Pour aider les concepteurs à développer rapidement des plateformes exploitant ces deux techniques évènementielles, nous avons élaboré un flot de conception nommé ALPS. Il propose un environnement permettant de déterminer et de simuler au niveau algorithmique le schéma d'échantillonnage et les traitements associés afin de sélectionner les plus efficients en fonction de l'application ciblée. ALPS génère directement le convertisseur analogique/numérique à partir des paramètres d'échantillonnage choisis. L'élaboration de la partie de traitement s'appuie quant à elle sur un outil de synthèse de haut niveau synchrone et une méthode de désynchronisation exploitant des protocoles asynchrones spécifiques, capables d'optimiser la surface et la consommation du circuit. Enfin, des simulations au niveau portes logiques permettent d'analyser et de valider l'énergie consommée avant de poursuivre par un flot classique de placement et routage. Les évaluations conduites montrent une réduction d'un facteur 3 à 8 de la consommation des circuits automatiquement générés. Le flot ALPS permet à un concepteur non-spécialiste de se concentrer sur l'optimisation de l'échantillonnage et de l'algorithme en fonction de l'application et de potentiellement réduire d'un ou plusieurs ordres de grandeur la consommation du circuit. / Integrated systems are mainly heterogeneous systems with strong power-consumption constraints.
They embed actuators, sensors and signal-processing units. To limit the energy consumption, they can exploit event-based techniques, namely non-uniform sampling and asynchronous circuits. Indeed, these allow drastically cutting the amount of sampled data for many types of signals and reducing the system activity. To help designers quickly develop platforms that exploit those event-based techniques, we elaborated a design framework called ALPS. It proposes an environment to determine and simulate at the algorithmic level the sampling scheme and the associated processing in order to select the most efficient ones depending on the targeted application. ALPS directly generates the analog-to-digital converter based on the chosen sampling parameters. The elaboration of the processing unit uses a synchronous high-level synthesis tool and a desynchronization method that exploits specific asynchronous protocols to optimize the circuit area and power consumption. Finally, gate-level simulations allow analyzing and validating the energy consumption before continuing with a standard placement-and-routing flow. The conducted evaluations show a reduction by a factor of 3 to 8 in the consumption of the automatically generated circuits. The ALPS flow allows non-specialists to concentrate on the optimization of the sampling and the processing as a function of their application and to reduce the circuit power consumption by one to several orders of magnitude.
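The non-uniform sampling mentioned above can be illustrated with a level-crossing scheme, a common event-based sampling technique. The thesis's actual sampling schemes and ADC parameters are not given here; the `delta` threshold and the test signal below are invented for illustration:

```python
def level_crossing_sample(signal, delta):
    """Event-based (level-crossing) sampling: emit a sample only when the
    input has moved by at least `delta` since the last emitted value."""
    samples = []
    last = None
    for i, x in enumerate(signal):
        if last is None or abs(x - last) >= delta:
            samples.append((i, x))
            last = x
    return samples

# A mostly flat signal with one ramp of activity: samples are emitted
# almost exclusively where the signal actually changes.
sig = [0.0] * 50 + [i / 10 for i in range(20)] + [2.0] * 50
events = level_crossing_sample(sig, 0.5)
print(len(sig), len(events))
```

For signals that are idle most of the time, the event count stays far below the uniform sample count, which is the source of the data reduction the abstract describes.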
242

Squelettes algorithmiques pour la programmation et l'exécution efficaces de codes parallèles / Algorithmic skeletons for efficient programming and execution of parallel codes

Legaux, Joeffrey 13 December 2013 (has links)
Les architectures parallèles sont désormais présentes dans tous les matériels informatiques, mais les programmeurs ne sont généralement pas formés à leur programmation dans les modèles explicites tels que MPI ou les Pthreads. Il y a un besoin important de modèles plus abstraits tels que les squelettes algorithmiques qui sont une approche structurée. Ceux-ci peuvent être vus comme des fonctions d’ordre supérieur synthétisant le comportement d’algorithmes parallèles récurrents que le développeur peut ensuite combiner pour créer ses programmes. Les développeurs souhaitent obtenir de meilleures performances grâce aux programmes parallèles, mais le temps de développement est également un facteur très important. Les approches par squelettes algorithmiques fournissent des résultats intéressants dans ces deux aspects. La bibliothèque Orléans Skeleton Library ou OSL fournit un ensemble de squelettes algorithmiques de parallélisme de données quasi-synchrones dans le langage C++ et utilise des techniques de programmation avancées pour atteindre une bonne efficacité. Nous avons amélioré OSL afin de lui apporter de meilleures performances et une plus grande expressivité. Nous avons voulu analyser le rapport entre les performances des programmes et l’effort de programmation nécessaire sur OSL et d’autres modèles de programmation parallèle. La comparaison rigoureuse entre des programmes parallèles dans OSL et leurs équivalents de bas niveau montre une bien meilleure productivité pour les modèles de haut niveau qui offrent une grande facilité d’utilisation tout en produisant des performances acceptables. / Parallel architectures have now reached every computing device, but software developers generally lack the skills to program them through explicit models such as MPI or Pthreads. There is a need for more abstract models such as algorithmic skeletons, which are a structured approach.
They can be viewed as higher-order functions that represent the behaviour of common parallel algorithms, which the programmer combines to generate parallel programs. Programmers want to obtain better performance through the use of parallelism, but the development time involved is also an important factor. Algorithmic skeletons provide interesting results in both respects. The Orléans Skeleton Library, or OSL, provides a set of algorithmic skeletons for data parallelism within the bulk synchronous parallel model for the C++ language. It uses advanced metaprogramming techniques to obtain good performance. We improved OSL in order to obtain better performance from its generated programs, and extended its expressivity. We wanted to analyze the ratio between the performance of programs and the development effort needed within OSL and other parallel programming models. The comparison between parallel programs written within OSL and their equivalents in low-level parallel models shows a better productivity for high-level models: they are easy to use for the programmers while providing decent performance.
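The skeleton style described above, higher-order functions capturing recurring parallel patterns that the programmer composes, can be sketched as follows. OSL itself is a C++ library built on the BSP model; this Python sketch with hypothetical `par_map`/`par_reduce` names only illustrates the composition idea, with sequential stand-ins for the parallel machinery:

```python
from functools import reduce
from operator import add

# Hypothetical data-parallel skeletons (sequential stand-ins; a real
# library such as OSL would execute these over BSP processes):
def par_map(f, xs):
    # pattern: apply f to every element, conceptually in parallel
    return [f(x) for x in xs]

def par_reduce(op, xs):
    # pattern: combine all elements with an associative operator
    return reduce(op, xs)

# The programmer composes skeletons instead of writing MPI/Pthreads code:
sum_of_squares = par_reduce(add, par_map(lambda x: x * x, range(100)))
print(sum_of_squares)  # 328350
```

The point of the pattern is that the distribution, communication and synchronisation live inside the skeletons, so the application code above never mentions them.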
243

Des systèmes à base de composants aux implémentations cadencées par le temps : une approche correcte par conception / From timed component-based systems to time-triggered implementations : a correct-by-design approach

Guesmi, Hela 27 October 2017 (has links)
Dans le domaine des systèmes temps-réel embarqués critiques, les méthodes de conception et de spécification et leurs outils associés doivent permettre le développement de systèmes au comportement temporel déterministe et, par conséquent, reproductible afin de garantir leur sûreté de fonctionnement. Pour atteindre cet objectif, on s’intéresse aux méthodologies de développement basées sur le paradigme Time-Triggered (TT). Dans ce contexte, bon nombre de propriétés et, en particulier, les contraintes temps-réel de bout en bout, se voient satisfaites par construction. Toutefois, garantir la sûreté de fonctionnement de tels systèmes reste un défi. En général, les outils de développement existants n’assurent pas par construction le respect de l’intégralité des spécifications ; celles-ci doivent donc être vérifiées a posteriori. Avec la complexité croissante des applications embarquées, celle de leur validation a posteriori devient, au mieux, un facteur majeur dans les coûts de développement et, au pire, tout simplement impossible. Il faut donc définir une méthode qui, tout en permettant le développement de systèmes corrects par construction, structure et simplifie le processus de spécification. Les méthodologies de conception de haut niveau à base de composants, qui permettent la conception et la vérification des systèmes temps-réel critiques, présentent une solution pour la structuration et la simplification du processus de spécification de tels systèmes. L’objectif de cette thèse est d'associer la méthodologie BIP (Behaviour-Interaction-Priority), qui est une approche de conception basée sur des composants, avec la plateforme d'exécution PharOS, qui est un système d'exploitation temps-réel déterministe orienté sûreté de fonctionnement. Le flot de conception proposé dans cette thèse est une approche transformationnelle qui permet de conserver les propriétés fonctionnelles des modèles originaux de BIP. Il est composé essentiellement de deux étapes.
La première étape, paramétrée par un mapping de tâches défini par l'utilisateur, permet de transformer un modèle BIP en un modèle plus restreint qui représente une description haut niveau des implémentations basées sur des primitives de communication TT. La deuxième étape permet la génération du code pour la plateforme PharOS à partir de ce modèle restreint. Un ensemble d'outils a été développé dans cette thèse afin d'automatiser la plupart des étapes du flot de conception proposé. Ceci a permis de tester cette approche sur deux cas d'étude industriels : un simulateur de vol et un relais de protection moyenne tension. Dans les deux applications, on vise à comparer les fonctionnalités du modèle BIP avec celles du modèle intermédiaire et du code généré. On fait varier les stratégies de mapping de tâches dans la première application, afin de tester leur impact sur le code généré. Dans la deuxième application, on étudie l'impact de la transformation sur le code généré en comparant quelques aspects de performance du code généré avec ceux d'une version de l'application qui a été développée manuellement. / In hard real-time embedded systems, design and specification methods and their associated tools must allow the development of temporally deterministic systems to ensure their safety. To achieve this goal, we are specifically interested in methodologies based on the Time-Triggered (TT) paradigm. This paradigm allows preserving by construction a number of properties, in particular end-to-end real-time constraints. However, ensuring the correctness and safety of such systems remains a challenging task. Existing development tools do not guarantee by construction that specifications are respected; thus, a posteriori verification of the application is generally a must. With the increasing complexity of embedded applications, their a posteriori validation becomes, at best, a major factor in the development costs and, at worst, simply impossible.
It is necessary, therefore, to define a method that allows the development of correct-by-construction systems while simplifying the specification process. High-level component-based design frameworks that allow the design and verification of hard real-time systems are very good candidates for structuring the specification process as well as verifying the high-level model. The goal of this thesis is to couple a high-level component-based design approach based on the BIP (Behaviour-Interaction-Priority) framework with a safety-oriented real-time execution platform implementing the TT approach (the PharOS real-time operating system). To this end, we propose an automatic transformation process from BIP models into applications for the target platform (i.e. PharOS). The process consists in a two-step semantics-preserving transformation. The first step transforms a BIP model, coupled to a user-defined task mapping, into a restricted one which lends itself well to an implementation based on TT communication primitives. The second step transforms the resulting model into the TT implementation provided by the PharOS RTOS. We provide a tool flow that automates most of the steps of the proposed approach and illustrate its use on two industrial case studies: a flight simulator application and a medium-voltage protection relay application. In both applications, we compare the functionalities of the original, intermediate and final models in order to confirm the correctness of the transformation. For the first application, we study the impact of the task mapping on the generated implementation. For the second application, we study the impact of the transformation on some performance aspects compared to a manually written version.
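BIP's three layers (behaviour as per-component automata, interactions as synchronised ports, priorities as a rule choosing among enabled interactions) can be caricatured in a few lines. This is a deliberately simplified, hypothetical rendering; real BIP semantics, with guards, data and richer priority orders, is far more elaborate:

```python
# Toy rendering of BIP's layers: Behaviour = component automata,
# Interaction = ports that must fire together, Priority = tie-breaking.
class Component:
    def __init__(self, name, transitions, state):
        self.name = name
        self.transitions = transitions  # {(state, port): next_state}
        self.state = state

    def enabled(self, port):
        return (self.state, port) in self.transitions

    def fire(self, port):
        self.state = self.transitions[(self.state, port)]

producer = Component("producer", {("idle", "put"): "idle"}, "idle")
consumer = Component("consumer", {("wait", "get"): "busy",
                                  ("busy", "done"): "wait"}, "wait")

# Interaction layer: 'put' and 'get' are a rendezvous; 'done' is unary.
interactions = [("transfer", [(producer, "put"), (consumer, "get")]),
                ("finish", [(consumer, "done")])]

def step():
    # an interaction is enabled only if all of its ports are enabled
    ready = [i for i in interactions
             if all(c.enabled(p) for c, p in i[1])]
    # Priority layer: here we simply prefer 'finish' over 'transfer'.
    ready.sort(key=lambda i: 0 if i[0] == "finish" else 1)
    name, ports = ready[0]
    for c, p in ports:
        c.fire(p)
    return name

first = step()
second = step()
print(first, second)  # transfer, then finish
```

The transformation described in the abstract would take models of this flavour and refine the multi-party interactions into time-triggered communication primitives.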
244

Factors predicting success in the final qualifying examination for chartered accountants

Wessels, Sally 11 1900 (has links)
Anyone desiring to qualify as an accountant or auditor is required to pass an examination as approved by the Public Accountants' and Auditors' Board, to establish whether candidates have attained the required standard of academic knowledge in terms of the syllabi laid down by the Board, as well as whether they are able to apply that knowledge in practice (PAAB, 1995). However, each year many students fail this very important examination. The reasons for this are not clear, and the purpose of this research is to determine whether personality, vocational interests, intelligence, and matriculation Mathematics and home language (English/Afrikaans) results predict success in the QE, by comparing a group of successful and a group of unsuccessful QE candidates. The logistic regression, discriminant analysis and t-test statistical procedures indicated that warmth (A), liveliness (F), rule-consciousness (G), social boldness (H), apprehension (O), self-reliance (Q2), perfectionism (Q3), tension (Q4), computational interest, social services interest, mechanical interest, Mental Alertness and matriculation home language are significant factors to consider when identifying candidates likely to be successful in the QE. / Industrial and Organisational Psychology / MCOM (Industrial Psychology)
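A minimal sketch of the logistic-regression procedure named above, assuming a single standardized predictor and invented pass/fail data (none of the study's actual variables or coefficients appear here):

```python
import math

# Invented toy data: one standardized predictor score per candidate,
# and whether they passed (1) or failed (0) the examination.
scores = [-2.0, -1.5, -1.0, -0.5, 0.5, 1.0, 1.5, 2.0]
passed = [0, 0, 0, 0, 1, 1, 1, 1]

w, b, lr = 0.0, 0.0, 0.5
for _ in range(2000):  # plain gradient ascent on the log-likelihood
    for x, y in zip(scores, passed):
        p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # predicted pass probability
        w += lr * (y - p) * x
        b += lr * (y - p)

def predict(x):
    """Classify a candidate as likely to pass (True) or fail (False)."""
    return 1.0 / (1.0 + math.exp(-(w * x + b))) >= 0.5

print(predict(-2.0), predict(2.0))
```

In the study itself the predictors are the 16PF factors, interest scores and matriculation results listed above, and the fitted coefficients indicate which of them significantly separate successful from unsuccessful candidates.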
245

Méthode de modélisation et de raffinement pour les systèmes hétérogènes. Illustration avec le langage SystemC-AMS / Study and development of an AMS design flow in SystemC: semantics, refinement and validation

Paugnat, Franck 25 October 2012 (has links)
Les systèmes sur puce intègrent aujourd’hui sur le même substrat des parties analogiques et des unités de traitement numérique. Tandis que la complexité de ces systèmes s’accroissait, leur temps de mise sur le marché se réduisait. Une conception descendante globale et coordonnée du système est devenue indispensable de façon à tenir compte des interactions entre les parties analogiques et les parties numériques dès le début du développement. Dans le but de répondre à ce besoin, cette thèse expose un processus de raffinement progressif et méthodique des parties analogiques, comparable à ce qui existe pour le raffinement des parties numériques. L'attention a été plus particulièrement portée sur la définition des niveaux analogiques les plus abstraits et sur la mise en correspondance des niveaux d’abstraction entre parties analogiques et numériques. La cohérence du raffinement analogique exige de détecter le niveau d’abstraction à partir duquel l’utilisation d’un modèle trop idéalisé conduit à des comportements irréalistes et, par conséquent, d’identifier l’étape du raffinement à partir de laquelle les limitations et les non-linéarités aux conséquences les plus fortes sur le comportement doivent être introduites. Cette étape peut être d’un niveau d'abstraction élevé. Le choix du style de modélisation le mieux adapté à chaque niveau d'abstraction est crucial pour atteindre le meilleur compromis entre vitesse de simulation et précision. Les styles de modélisation possibles à chaque niveau ont été examinés de façon à évaluer leur impact sur la simulation. Les différents modèles de calcul de SystemC-AMS ont été catégorisés dans cet objectif. Les temps de simulation obtenus avec SystemC-AMS ont été comparés avec Matlab Simulink. L'interface entre les modèles issus de l'exploration d'architecture, encore assez abstraits, et les modèles plus fins requis pour l'implémentation, est une question qui reste entière.
Une bibliothèque de composants électroniques complexes décrits en SystemC-AMS avec le modèle de calcul le plus précis (modélisation ELN) pourrait être une voie pour réussir une telle interface. Afin d’illustrer ce que pourrait être un élément d’une telle bibliothèque et ainsi démontrer la faisabilité du concept, un modèle d'amplificateur opérationnel a été élaboré de façon à être suffisamment détaillé pour prendre en compte la saturation de la tension de sortie et la vitesse de balayage finie, tout en gardant un niveau d'abstraction suffisamment élevé pour rester indépendant de toute hypothèse sur la structure interne de l'amplificateur ou la technologie à employer. / Systems on Chip (SoC) embed in the same chip analogue parts and digital processing units. While their complexity is ever increasing, their time to market is becoming shorter. A global and coordinated top-down design approach of the whole system is becoming crucial in order to take into account the interactions between the analogue and digital parts from the beginning of the development. This thesis presents a systematic and gradual refinement process for the analogue parts, comparable to what exists for the digital parts. Special attention has been paid to the definition of the most abstracted analogue levels and to the correspondence between the analogue and the digital abstraction levels. The consistency of the analogue refinement requires detecting the abstraction level where a too idealised model leads to unrealistic behaviours. The refinement step then consists in introducing, for instance, the limitations and non-linearities that have a strong impact on the behaviour. Such a step can be done at a relatively high level of abstraction. Correctly choosing a modelling style that suits an abstraction level well is crucial to obtain the best trade-off between simulation speed and accuracy.
The modelling styles at each abstraction level have been examined to understand their impact on the simulation. The SystemC-AMS models of computation have been classified for this purpose. The SystemC-AMS simulation times have been compared to those obtained with Matlab Simulink. The interface between models arising from the architectural exploration, still rather abstracted, and the more detailed models that are required for the implementation, is still an open question. A library of complex electronic components described with the most accurate model of computation of SystemC-AMS (ELN modelling) could be a way to achieve such an interface. In order to show what an element of such a library could be, and thus prove the concept, a model of an operational amplifier has been elaborated. It is detailed enough to take into account the output voltage saturation and the finite slew rate of the amplifier. Nevertheless, it remains sufficiently abstracted to stay independent from any architectural or technological assumption.
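The behavioural op-amp model described above (output saturation plus finite slew rate, with no assumption about internal structure) can be sketched in discrete time. The gain, saturation and slew-rate values below are invented, and the thesis's actual library element is of course an ELN model in SystemC-AMS, not Python:

```python
def opamp_step(vin, out, dt, gain=1e5, vsat=2.5, slew=1e6):
    """One time step of a behavioural op-amp output stage:
    ideal gain, then output saturation, then slew-rate limiting."""
    desired = max(-vsat, min(vsat, gain * vin))  # saturated target voltage
    max_step = slew * dt                         # finite slew rate (V/s)
    step = max(-max_step, min(max_step, desired - out))
    return out + step

# Response to a 10 mV differential step input: the output ramps at the
# slew rate, then sits at the saturation voltage instead of gain * vin.
out, dt, vin = 0.0, 1e-7, 0.01
trace = []
for _ in range(50):
    out = opamp_step(vin, out, dt)
    trace.append(out)
print(trace[0], trace[-1])
```

Both non-idealities matter at a high abstraction level: without them, a comparator or oscillator built from the ideal model would show unrealistically fast, unbounded outputs.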
246

Génération de modèles de haut niveau enrichis pour les systèmes hétérogènes et multiphysiques / Generating high-level enriched models for heterogeneous and multiphysics systems

Bousquet, Laurent 29 January 2014 (has links)
Les systèmes sur puce sont de plus en plus complexes : ils intègrent des parties numériques, des parties analogiques et des capteurs ou actionneurs. SystemC et son extension SystemC AMS permettent aujourd’hui de modéliser à haut niveau d’abstraction de tels systèmes. Ces outils constituent de véritables atouts dans une optique d’étude de faisabilité, d’exploration architecturale et de vérification du fonctionnement global des systèmes complexes hétérogènes et multiphysiques. En effet, les durées de simulation deviennent trop importantes pour envisager les simulations globales à bas niveau d’abstraction. De plus, les simulations basées sur l’utilisation conjointe de différents outils provoquent des problèmes de synchronisation. Les modèles de bas niveau, une fois créés par les spécialistes des différents domaines, peuvent toutefois être abstraits afin de générer des modèles de haut niveau simulables sous SystemC/SystemC AMS en des temps de simulation réduits. Une analyse des modèles de calcul et des styles de modélisation possibles est d’abord présentée afin d’établir un lien avec les durées de simulation, ceci pour proposer un style de modélisation en fonction du niveau d’abstraction souhaité et de l’ampleur de la simulation à effectuer. Dans le cas des circuits analogiques linéaires, une méthode permettant de générer automatiquement des modèles de haut niveau d’abstraction à partir de modèles de bas niveau a été proposée. Afin d’évaluer très tôt dans le flot de conception la consommation d’un système, un moyen d’enrichir les modèles de haut niveau préalablement générés est présenté. L’attention a ensuite été portée sur la modélisation à haut niveau des systèmes multiphysiques. Deux méthodes y sont discutées : la méthode consistant à utiliser le circuit équivalent électrique, puis la méthode basée sur les bond graphs. En particulier, nous proposons une méthode permettant de générer un modèle équivalent au bond graph à partir d’un modèle de bas niveau.
Enfin, la modélisation d’un système éolien est étudiée afin d’illustrer les différents concepts présentés dans cette thèse. / Systems on chip are more and more complex, as they now embed not only digital and analog parts, but also sensors and actuators. SystemC and its extension SystemC AMS allow the high-level modeling of such systems. These tools are efficient for feasibility studies, architectural exploration and global verification of heterogeneous and multiphysics systems. At low levels of abstraction, the simulation durations become too long. Moreover, synchronization problems appear when cosimulations are performed. It is possible to abstract the low-level models that are developed by the specialists of the different domains to create high-level models that can be simulated faster using SystemC/SystemC AMS. The models of computation and the modeling styles have been studied. A relation is shown between the modeling style, the model size and the simulation speed. A method that automatically generates the high-level model of an analog linear circuit from its low-level representation is proposed. Then, it is shown how to include in the high-level model some information allowing the power consumption estimation. After that, the modeling of multiphysics systems is studied. Two methods are discussed: first, the one that uses the electrical equivalent circuit, then the one based on the bond graph approach. It is shown how to generate a bond graph equivalent model from a low-level representation. Finally, the modeling of a wind turbine system is discussed in order to illustrate the different concepts presented in this thesis.
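The abstraction of an analog linear circuit into a fast high-level model can be illustrated on the simplest case: an RC low-pass reduced to its first-order difference equation. The component values and the backward-Euler discretisation below are illustrative choices, not taken from the thesis:

```python
# High-level model of an RC low-pass (tau = R*C), discretised with
# backward Euler: v[n] = (v[n-1] + (dt/tau) * vin[n]) / (1 + dt/tau).
# The netlist-level detail is gone; only the transfer behaviour remains.
R, C, dt = 1e3, 1e-6, 1e-5          # 1 kOhm, 1 uF -> tau = 1 ms
tau = R * C
a = dt / tau

v = 0.0
for n in range(1000):               # 10 ms of a 1 V input step
    v = (v + a * 1.0) / (1 + a)
print(v)                            # close to 1.0 after 10 tau
```

A generated high-level model of this kind simulates orders of magnitude faster than a transistor-level description while preserving the input/output behaviour that matters for system-level verification.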
247

Produção urbana da cidade contemporânea: os rebatimentos morfológicos dos condomínios urbanísticos e loteamentos fechados de alto padrão da Avenida Professor João Fiúsa e Rodovia José Fregonesi no tecido urbano de Ribeirão Preto/SP / Urban production of the contemporary city: the morphological repercussions of the high-end gated condominiums and closed subdivisions of Avenida Professor João Fiúsa and Rodovia José Fregonesi on the urban fabric of Ribeirão Preto/SP

Tânia Maria Bulhões Figueira 25 April 2013 (has links)
O trabalho analisa as dinâmicas territoriais contemporâneas e os fluxos de metropolização promovidos em áreas de expansão urbana, tendo como estudo Ribeirão Preto, cidade de médio porte localizada no interior do estado de São Paulo/Brasil. O município, com área de 650,955 Km², apresenta 604.682 habitantes, conforme o censo de 2010 promovido pelo IBGE-Instituto Brasileiro de Geografia e Estatística. É um dos principais parques agroindustriais brasileiros compondo a terceira região de maior relevância econômica do estado de São Paulo - principal região econômica do país -, com um produto interno bruto per capita igual a 28.100,52 reais [sendo o produto interno bruto per capita brasileiro igual a 21.252,41 reais, segundo o mesmo censo]. O período entre a década de 1980 e os anos 2000 foi marcado por um extraordinário desenvolvimento econômico da região de Ribeirão Preto com desdobramentos na urbanização de seu território contíguo. De forma semelhante ao que ocorreu nas principais metrópoles brasileiras, a cidade passou a produzir e experimentar situações urbanas decorrentes das novas lógicas de organização econômica e social, com particular articulação em relação aos interesses imobiliários. A lógica do mercado imobiliário, coligada ao modelo de acumulação vigente nos últimos quarenta anos - marcado pela financeirização da economia -, possui rebatimentos na configuração do espaço urbano. A privatização de frações consideráveis do território, principalmente em áreas de expansão, apresenta-se como produto e preceito da conformação espacial atual, colaborando para o acirramento de processos de segregação morfológica e social dos ambientes urbanos e de transformação dos valores públicos e culturais. Este modelo de expansão, cindido da conformação histórica da cidade e alimentado pela flexibilização da legislação urbana, cria condições para o surgimento de problemas que associam um desenho urbano tributário da iniciativa privada a processos de gentrification. 
A resultante é uma urbanização dispersa, contudo, conectada à estrutura urbana existente por um viário que estimula o transporte individual em detrimento de sistemas coletivos. O problema de tal constituição urbana não está no fato de responder às demandas provenientes do novo modelo de acumulação, mas sim de reduzir-se apenas a isso, voltando-se exclusivamente às dinâmicas econômicas e, portanto, estando divorciada das dimensões políticas e de cidadania da sociedade. O trabalho busca compreender as novas produções em curso dos espaços urbanos, investigando as privatizações de áreas significativas do território de Ribeirão Preto: os condomínios urbanísticos e loteamentos fechados de alto padrão [de usos habitacionais e mistos] localizados em áreas de expansão urbana, particularmente implantados em regiões adjacentes à Avenida Professor João Fiúsa e à Rodovia José Fregonesi [SP-328], os quais parecem prescindir do conceito de cidade conformada historicamente, produzindo no limite [e contraditoriamente] um urbanismo sem cidade. / The work analyzes the current territorial dynamics and its metropolization flows in urban growth areas. The city chosen as the object of study was Ribeirão Preto, a medium-sized city in the interior of São Paulo state. It has a population of 604,682 inhabitants in a 650.955 km² area according to the 2010 census. Well known as one of the main agribusiness centers in the country, Ribeirão Preto represents the third most important economy of São Paulo state and plays a major role in the Brazilian economy. Contrasting with Brazil's per capita GDP of R$21,252.41, Ribeirão Preto has a per capita GDP of R$28,100.52. Between the 1980s and the 2000s, a remarkable economic development and urbanization improvement were noticed in Ribeirão Preto.
As in other major Brazilian metropolises, the city began to produce and experience urban situations derived from new economic and social logics of organization, with a particular articulation connected to real estate interests. The property market logic, linked to the accumulation model of the last forty years, marked by the financialisation of the economy, has reverberated on the structural configuration of urban space. The privatization of significant fractions of the urban territory is presented as a product and precept of the current spatial conformation, especially in expansion areas. It contributes to the worsening of urban processes of morphological and social segregation and to the transformation of public and cultural values. This urban expansion model is split from the historical conformation of the city; fueled by the easing of urban legislation, it creates conditions for problems that associate an urban design driven by private initiative with gentrification processes. The result is a dispersed urbanization which is nevertheless connected to the existing urban structure by a road system that encourages individual transport to the detriment of collective systems. The problem with this urban constitution is not that it responds to the demands of the new accumulation model, but that it is reduced exclusively to that, turning only to economic dynamics and therefore being divorced from the political and citizenship dimensions of society. Based on this, the work seeks to understand the new ongoing production of urban spaces. Hence, some private urban areas that exemplify this dynamic were selected: the high-end gated condominiums and closed subdivisions [of residential and mixed uses] located in urban expansion areas, especially adjacent to Avenida Professor João Fiúsa and Rodovia José Fregonesi [SP-328], which seem to dispense with the concept of the historically shaped city, producing at the limit [and contradictorily] an urbanism without a city.
248

Desenvolvimento de processo de obtenção de nanopartículas de sílica a partir de resíduo de fonte renovável e incorporação em polímero termoplástico para a fabricação de nanocompósito / Development of a process for obtaining silica nanoparticles from renewable-source waste and their incorporation into a thermoplastic polymer for manufacturing a nanocomposite

ORTIZ, ANGEL V. 25 May 2017 (has links)
A tecnologia de nanocompósitos é aplicável a uma vasta gama de polímeros termoplásticos e termofixos. A utilização de subprodutos da cana-de-açúcar tem sido extensivamente estudada como fonte de reforços para os nanocompósitos. O bagaço da cana é largamente utilizado na cogeração de energia e, como resultado da queima deste material, são produzidas milhões de toneladas de cinzas. Para este trabalho, a sílica contida nas cinzas do bagaço da cana-de-açúcar foi extraída por método químico e por método térmico. O método térmico se mostrou mais eficiente, levando a uma pureza de mais de 93 % em sílica, enquanto o método químico gerou sílica bastante contaminada com cloro e sódio provenientes dos reagentes da extração. As partículas de sílica obtidas foram avaliadas por espalhamento de luz dinâmico (DLS) e apresentaram tamanho médio de 12 μm. Estas partículas foram submetidas à moagem em moinho de bolas e, na sequência, a tratamento sonoquímico em meio líquido. As partículas de sílica tratadas no processo sonoquímico a 20 kHz, potência de 500 W e 90 minutos tiveram suas dimensões reduzidas à escala nanométrica, da ordem de dezenas de nanômetros. A nanossílica obtida foi então incorporada como reforço em polietileno de alta densidade (HDPE). Ensaios mecânicos e termomecânicos mostram ganhos de propriedades mecânicas, com exceção da propriedade de resistência ao impacto. O ensaio de deflexão térmica (HDT) mostrou que a incorporação deste reforço no HDPE levou a um pequeno aumento nesta propriedade em relação ao HDPE puro. A cristalinidade dos nanocompósitos gerados foi avaliada por meio de calorimetria exploratória diferencial (DSC) e observou-se um decréscimo de cristalinidade do material quando a incorporação de reforço foi de 3 %.
The material irradiated at 250 kGy with an electron beam shows marked gains in its main properties, chiefly due to the high crosslinking level of the irradiated HDPE. / Thesis (Doctorate in Nuclear Technology) / IPEN/T / Instituto de Pesquisas Energéticas e Nucleares - IPEN-CNEN/SP
249

Renda e gastos com educação de nível superior

Thomé, Francisco Augusto Seixas 31 May 2012 (has links)
This study examines how inelastic spending on higher education is with respect to income. Higher-income households spend more on higher education than lower-income ones, and in Brazil spending rises with income, but the relationship is inelastic: a 1.0% increase in monthly income leads to a 0.31% increase in monthly spending on tertiary education. As for the share of household income devoted to education, the evidence shows that as income rises in certain geographic regions, a smaller share of it goes to higher-education spending than in other regions. This suggests that in higher-income households an income change will not strongly influence the decision to invest more in education in order to obtain a better-quality university course. Among the Brazilian regions there are differences that often stem from the number of residents per household and from educational differences, frequently within the same household. In higher-income households, part of an income increase is often allocated to other activities, since it does not greatly alter the decision to invest in university education. This was found to occur in the Southeast and South regions, where income is above the national average and the number of residents per household is relatively lower. We also observed that in these regions the ratio of university places per student is higher, corroborating that, as the wealthiest regions, they are better positioned to invest in higher education.
250

Evaluating Vivado High-Level Synthesis on OpenCV Functions for the Zynq-7000 FPGA

Johansson, Henrik January 2015 (has links)
More complex and intricate Computer Vision algorithms combined with higher-resolution image streams put ever bigger demands on processing power. CPU clock frequencies are now pushing the limits of possible speeds, so CPUs have instead started growing in number of cores. Most Computer Vision algorithms respond well to parallel solutions. Dividing an algorithm over 4-8 CPU cores can give a good speed-up, but using chips with Programmable Logic (PL) such as FPGAs can give even more. An interesting recent addition to the FPGA family is a System on Chip (SoC) that combines a CPU and an FPGA in one chip, such as the Zynq-7000 series from Xilinx. This tight integration between the Programmable Logic and the Processing System (PS) opens the door to designs where C programs can use the programmable logic to accelerate selected parts of the algorithm, while still behaving like a C program. On that subject, Xilinx has introduced a new high-level synthesis tool called Vivado HLS, which can accelerate C code by synthesizing it to Hardware Description Language (HDL) code. This potentially bridges two otherwise very separate worlds: the ever-popular OpenCV library and FPGAs. This thesis will focus on evaluating Vivado HLS from Xilinx, primarily with image processing in mind, for potential use on GIMME-2; a system with a Zynq-7020 SoC and two high-resolution image sensors, tailored for stereo vision.
