241

Algorithmic skeletons for efficient programming and execution of parallel codes

Legaux, Joeffrey 13 December 2013 (has links)
Parallel architectures have now reached every computing device, but software developers generally lack the skills to program them through explicit models such as MPI or Pthreads. There is a need for more abstract models such as algorithmic skeletons, which are a structured approach. They can be viewed as higher-order functions that capture the behaviour of common parallel algorithms and are combined by the programmer to build parallel programs. Programmers want better performance through parallelism, but development time is also an important factor; algorithmic skeletons give interesting results on both counts. The Orléans Skeleton Library (OSL) provides a set of algorithmic skeletons for data parallelism within the bulk synchronous parallel model for the C++ language, using advanced metaprogramming techniques to obtain good performance. We improved OSL to obtain better performance from its generated programs and extended its expressivity. We then analysed the ratio between program performance and the development effort needed within OSL and other parallel programming models. The comparison between parallel programs written with OSL and their equivalents in low-level parallel models shows much better productivity for the high-level models: they are easy to use while providing decent performance.
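To make the "higher-order functions combined by the programmer" idea concrete, here is a minimal illustrative sketch. OSL itself is a C++ library over distributed arrays; the plain Python functions below (names `skel_map`, `skel_zip`, `skel_reduce` are invented for illustration) only mimic the sequential semantics of such data-parallel skeletons.

```python
from functools import reduce

def skel_map(f, xs):
    """'map' skeleton: apply f independently to every element (parallelizable)."""
    return [f(x) for x in xs]

def skel_zip(f, xs, ys):
    """'zip' skeleton: combine two distributed arrays element-wise."""
    return [f(x, y) for x, y in zip(xs, ys)]

def skel_reduce(f, xs):
    """'reduce' skeleton: combine all elements with an associative operator."""
    return reduce(f, xs)

# A program is a composition of skeletons, e.g. a dot product:
def dot(xs, ys):
    return skel_reduce(lambda a, b: a + b,
                       skel_zip(lambda x, y: x * y, xs, ys))

print(dot([1, 2, 3], [4, 5, 6]))  # 32
```

The point of the structured approach is visible even in this toy: the developer never writes communication or synchronization code, only compositions of skeletons whose parallel implementations are provided by the library.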
242

From timed component-based systems to time-triggered implementations: a correct-by-design approach

Guesmi, Hela 27 October 2017 (has links)
In hard real-time embedded systems, design and specification methods and their associated tools must allow the development of temporally deterministic systems in order to ensure their safety. To achieve this goal, we focus on methodologies based on the Time-Triggered (TT) paradigm, which preserves by construction a number of properties, in particular end-to-end real-time constraints. However, ensuring the correctness and safety of such systems remains challenging. Existing development tools do not guarantee by construction that all specifications are respected, so a posteriori verification of the application is generally required. With the increasing complexity of embedded applications, this a posteriori validation becomes, at best, a major factor in development costs and, at worst, simply impossible. It is therefore necessary to define a method that allows the development of correct-by-construction systems while structuring and simplifying the specification process. High-level component-based design frameworks that allow the design and verification of hard real-time systems are very good candidates for this. The goal of this thesis is to couple a high-level component-based design approach based on the BIP (Behaviour-Interaction-Priority) framework with a safety-oriented real-time execution platform implementing the TT approach (the PharOS real-time operating system). To this end, we propose an automatic transformation process from BIP models into applications for the target platform (i.e. PharOS). The process consists of a two-step semantics-preserving transformation. The first step transforms a BIP model, coupled with a user-defined task mapping, into a restricted model that lends itself well to an implementation based on TT communication primitives. The second step transforms the resulting model into the TT implementation provided by the PharOS RTOS. We provide a tool flow that automates most of the steps of the proposed approach and illustrate its use on two industrial case studies: a flight simulator and a medium-voltage protection relay. In both applications, we compare the functionality of the original, intermediate, and final models in order to confirm the correctness of the transformation. For the first application, we study the impact of the task mapping on the generated implementation; for the second, we study the impact of the transformation on some performance aspects compared with a manually written version.
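What makes a time-triggered design "deterministic by construction" is that every task activation is fixed offline in a static dispatch table. The sketch below (not the BIP/PharOS toolchain, just an illustration of the TT idea; task names and timings are invented) builds such a table over one hyperperiod.

```python
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

def dispatch_table(tasks):
    """tasks: dict name -> (period, offset), times in ms.
    Returns the sorted list of (time, task) activations over one
    hyperperiod; replaying this table gives reproducible timing."""
    hyper = 1
    for period, _ in tasks.values():
        hyper = lcm(hyper, period)
    table = [(offset + k * period, name)
             for name, (period, offset) in tasks.items()
             for k in range(hyper // period)]
    return sorted(table)

# Two tasks: periods 5 ms and 10 ms, hyperperiod 10 ms.
print(dispatch_table({"sensor": (5, 0), "control": (10, 2)}))
# [(0, 'sensor'), (2, 'control'), (5, 'sensor')]
```

Because the table is computed before execution, end-to-end timing properties can be checked once, statically, instead of being re-verified against every possible run-time interleaving.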
243

Factors predicting success in the final qualifying examination for chartered accountants

Wessels, Sally 11 1900 (has links)
Anyone wishing to qualify as an accountant or auditor is required to pass an examination approved by the Public Accountants' and Auditors' Board, which establishes whether candidates have attained the required standard of academic knowledge in terms of the syllabi laid down by the Board, and whether they are able to apply that knowledge in practice (PAAB, 1995). However, each year many students fail this very important examination. The reasons for this are not clear, and the purpose of this research is to determine whether personality, vocational interests, intelligence, and matriculation Mathematics and home-language (English/Afrikaans) results predict success in the Qualifying Examination (QE), by comparing a group of successful and a group of unsuccessful QE candidates. Logistic regression, discriminant analysis, and t-test statistical procedures indicated that warmth (A), liveliness (F), rule-consciousness (G), social boldness (H), apprehension (O), self-reliance (Q2), perfectionism (Q3), tension (Q4), computational interest, social services interest, mechanical interest, Mental Alertness, and matriculation home language are significant factors to consider when identifying candidates likely to be successful in the QE. / Industrial and Organisational Psychology / MCOM (Industrial Psychology)
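Logistic regression, the first of the statistical procedures named above, models the probability of a binary outcome (pass/fail) from candidate attributes. A minimal self-contained sketch, assuming a single invented standardized-mark feature and toy labels (not the thesis data), fitted by gradient descent:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(xs, ys, lr=0.1, epochs=2000):
    """Fit P(pass) = sigmoid(w*x + b) by gradient descent on the log-loss."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        dw = db = 0.0
        for x, y in zip(xs, ys):
            err = sigmoid(w * x + b) - y   # gradient of log-loss wrt the logit
            dw += err * x / n
            db += err / n
        w -= lr * dw
        b -= lr * db
    return w, b

# Invented standardized marks; 1 = passed the examination, 0 = failed.
marks  = [-1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5]
passed = [0, 0, 0, 1, 1, 1, 1]
w, b = fit_logistic(marks, passed)
print(sigmoid(w * 1.0 + b) > 0.5)   # a high mark yields a predicted pass
```

In the study itself the model would take many predictors at once (the 16PF factors, interest scales, Mental Alertness, and matriculation results); the single-feature version above only illustrates the mechanics.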
244

Study and development of an AMS design flow in SystemC: semantics, refinement and validation

Paugnat, Franck 25 October 2012 (has links)
Systems on Chip (SoC) embed analogue parts and digital processing units on the same chip. While their complexity is ever increasing, their time to market is becoming shorter. A global, coordinated top-down design approach of the whole system is becoming crucial in order to take the interactions between the analogue and digital parts into account from the beginning of the development. This thesis presents a systematic and gradual refinement process for the analogue parts, comparable to what exists for the digital parts. Special attention has been paid to the definition of the most abstract analogue levels and to the correspondence between the analogue and digital abstraction levels. Consistent analogue refinement requires detecting the abstraction level at which a too-idealised model leads to unrealistic behaviour; the corresponding refinement step then introduces the limitations and non-linearities that have the strongest impact on the behaviour. Such a step can occur at a relatively high level of abstraction. Choosing the modelling style best suited to each abstraction level is crucial to obtain the best trade-off between simulation speed and accuracy. The possible modelling styles at each level have been examined to understand their impact on simulation, and the SystemC-AMS models of computation have been classified for this purpose. SystemC-AMS simulation times have been compared with those obtained with Matlab Simulink. The interface between the still rather abstract models arising from architectural exploration and the more detailed models required for implementation remains an open question. A library of complex electronic components described with the most accurate SystemC-AMS model of computation (ELN modelling) could be a way to achieve such an interface. To illustrate what an element of such a library could be, and thus prove the concept, a model of an operational amplifier has been elaborated. It is detailed enough to take the output voltage saturation and the finite slew rate of the amplifier into account, yet remains sufficiently abstract to stay independent of any assumption about the amplifier's internal structure or the technology to be used.
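The two non-idealities the op-amp model captures, output saturation and finite slew rate, can be illustrated outside SystemC-AMS with a plain discrete-time loop. This is a hedged behavioural sketch, not the thesis's ELN model; the gain, saturation, and slew values are arbitrary illustration choices.

```python
def opamp_step_response(v_in, dt, gain=1e5, v_sat=5.0, slew=1.0):
    """Behavioural amplifier: ideal gain, then clip the output to +/-v_sat
    (saturation) and limit its excursion per step to slew*dt (finite slew
    rate). v_in is a list of input samples, dt the time step."""
    out, v = [], 0.0
    for vin in v_in:
        target = max(-v_sat, min(v_sat, gain * vin))      # saturated ideal output
        dv = max(-slew * dt, min(slew * dt, target - v))  # slew-rate limit
        v += dv
        out.append(v)
    return out

# A 1 V input step: the output ramps at the slew rate, then sits at +v_sat.
dt = 0.5
ys = opamp_step_response([1.0] * 20, dt)
print(max(ys))          # 5.0  (saturation)
print(ys[1] - ys[0])    # 0.5  (slew * dt per step)
```

An idealised model would jump instantly to the full open-loop output; the clipped ramp above is exactly the kind of behaviour whose absence makes a too-abstract model unrealistic.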
245

Generating high-level enriched models for heterogeneous and multiphysics systems

Bousquet, Laurent 29 January 2014 (has links)
Systems on chip are increasingly complex: they now embed not only digital and analog parts but also sensors and actuators. SystemC and its extension SystemC AMS allow the high-level modeling of such systems. These tools are efficient for feasibility studies, architectural exploration, and global verification of heterogeneous and multiphysics systems. At low levels of abstraction, simulation durations become too long, and synchronization problems appear when co-simulations across different tools are performed. The low-level models developed by the specialists of the different domains can, however, be abstracted to create high-level models that simulate faster under SystemC/SystemC AMS. The models of computation and the modeling styles are first analysed to establish the relation between modeling style, model size, and simulation speed, so that a modeling style can be proposed according to the desired abstraction level and the scale of the simulation. For linear analog circuits, a method is proposed that automatically generates a high-level model from a low-level representation. To evaluate a system's power consumption early in the design flow, a way to enrich the previously generated high-level models is then presented. Attention then turns to the high-level modeling of multiphysics systems. Two methods are discussed: one using the electrical equivalent circuit, the other based on bond graphs. In particular, we propose a method to generate a bond-graph-equivalent model from a low-level representation. Finally, the modeling of a wind turbine system is studied to illustrate the different concepts presented in this thesis.
246

Urban production of the contemporary city: the morphological repercussions of high-end gated communities and condominiums along Avenida Professor João Fiúsa and Rodovia José Fregonesi on the urban fabric of Ribeirão Preto/SP

Tânia Maria Bulhões Figueira 25 April 2013 (has links)
This work analyzes current territorial dynamics and metropolization flows in urban growth areas, taking as its case study Ribeirão Preto, a medium-sized city in the interior of São Paulo state, Brazil. According to the 2010 IBGE census, the municipality has 604,682 inhabitants in an area of 650.955 km². One of the main Brazilian agro-industrial centers, Ribeirão Preto anchors the third most economically important region of São Paulo state, itself the country's main economic region, with a GDP per capita of R$28,100.52 against a Brazilian GDP per capita of R$21,252.41 in the same census. Between the 1980s and the 2000s the region experienced remarkable economic development, with consequences for the urbanization of its surrounding territory. As in other major Brazilian metropolises, the city began to produce and experience urban situations derived from new logics of economic and social organization, particularly articulated with real-estate interests. The logic of the property market, linked to the accumulation model of the last forty years and marked by the financialization of the economy, has repercussions on the configuration of urban space. The privatization of significant fractions of the territory, especially in expansion areas, appears as both product and precept of the current spatial conformation, contributing to the intensification of morphological and social segregation in urban environments and to the transformation of public and cultural values. This expansion model, severed from the city's historical conformation and fueled by the loosening of urban legislation, creates conditions for problems that combine an urban design driven by private initiative with gentrification processes. The result is a dispersed urbanization, nonetheless connected to the existing urban structure by a road system that favors individual transport over collective systems. The problem with this urban constitution is not that it responds to the demands of the new accumulation model, but that it is reduced exclusively to that, attending only to economic dynamics and thus divorced from the political and citizenship dimensions of society. The work seeks to understand these ongoing productions of urban space by investigating the privatization of significant areas of Ribeirão Preto's territory: the high-end gated communities and condominiums (residential and mixed-use) located in urban expansion areas, particularly in the regions adjacent to Avenida Professor João Fiúsa and Rodovia José Fregonesi (SP-328), which seem to dispense with the historically constituted concept of the city, producing, at the limit and contradictorily, an urbanism without a city.
247

Development of a process for obtaining silica nanoparticles from renewable-source waste and their incorporation into a thermoplastic polymer for nanocomposite manufacturing

ORTIZ, ANGEL V. 25 May 2017 (has links)
Nanocomposite technology is applicable to a wide range of thermoplastic and thermoset polymers. Sugarcane by-products have been extensively studied as a source of reinforcements for nanocomposites: sugarcane bagasse is widely used for energy cogeneration, and burning it produces millions of tonnes of ash. In this work, the silica contained in sugarcane bagasse ash was extracted by a chemical method and by a thermal method. The thermal method proved more efficient, yielding a purity above 93% silica, while the chemical method produced silica heavily contaminated with chlorine and sodium from the extraction reagents. The silica particles obtained were characterized by dynamic light scattering (DLS) and showed an average size of 12 μm. These particles were ball-milled and then given a sonochemical treatment in a liquid medium. Particles treated sonochemically at 20 kHz and 500 W for 90 minutes were reduced to the nanometric scale, on the order of tens of nanometres. The resulting nanosilica was then incorporated as a reinforcement into high-density polyethylene (HDPE). Mechanical and thermo-mechanical tests show gains in mechanical properties, with the exception of impact resistance. The heat deflection temperature (HDT) test showed that incorporating this reinforcement into HDPE produced a small increase in this property relative to neat HDPE. The crystallinity of the nanocomposites was evaluated by differential scanning calorimetry (DSC), and a decrease in crystallinity was observed at 3% reinforcement loading. Material irradiated at 250 kGy with an electron beam shows marked gains in its main properties, mainly due to the high crosslinking level of the irradiated HDPE. / Tese (Doutorado em Tecnologia Nuclear) / IPEN/T / Instituto de Pesquisas Energéticas e Nucleares - IPEN-CNEN/SP
248

Income and spending on higher education

Thomé, Francisco Augusto Seixas 31 May 2012 (has links)
Submitted by Francisco Augusto Seixas Thomé (francisco.thome@fgv.br) on 2013-01-03T21:38:35Z No. of bitstreams: 1 Thomé.pdf: 1674130 bytes, checksum: b3ec4566d7dfee1d5ea069cfedaa082a (MD5) / Approved for entry into archive by Vitor Souza (vitor.souza@fgv.br) on 2013-01-15T13:09:01Z (GMT) No. of bitstreams: 1 Thomé.pdf: 1674130 bytes, checksum: b3ec4566d7dfee1d5ea069cfedaa082a (MD5) / Made available in DSpace on 2013-02-04T18:21:25Z (GMT). No. of bitstreams: 1 Thomé.pdf: 1674130 bytes, checksum: b3ec4566d7dfee1d5ea069cfedaa082a (MD5) Previous issue date: 2012-05-31 / This study intends to verify how inelastic is the spending of money, with higher education in relation to the income. We found that families with higher income, spend more on that kind of education than those of lower. We observed also in Brazil, that as higher the incomes more is spent on high level education, but this correlation is inelastic, with an increase of 1,0% on the month income, carries 0,31% increase in monthly expenditure on tertiary education. In relation to the amount spent on education, the family income, we may observe that when the family income increases in certain geographic regions, a small part of it is reserved for high level education than in other regions, as we could verify. This suggests that families with high income levels, will not be affected when deciding to invest more in education to have a better quality of education compared to others. We may observe that among the brazilian regions, there are differences that often come from the number of residents and educational differences, usually in the same family. In families with higher income, we found often that part of this increase was forwarded to other activities, and this will not change so much its decision on investing in university education. 
This was verified in the Southeast and South regions, where income is above the national average and the number of residents per household is relatively lower. We also observe that in these regions the ratio of university places per student is higher, confirming that, as the regions with the best economic conditions, they have the best opportunity to invest in higher education.
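The 0.31 elasticity quoted above reads naturally as the slope of a log-log (constant-elasticity) specification. The form below is a sketch of that reading, not necessarily the exact model estimated in the thesis:

```latex
\[
\ln S_i \;=\; \alpha \;+\; \varepsilon \,\ln Y_i \;+\; u_i,
\qquad
\varepsilon \;=\; \frac{\partial \ln S}{\partial \ln Y} \;\approx\; 0.31,
\]
```

where \(S_i\) is household \(i\)'s monthly spending on higher education and \(Y_i\) its monthly income; since \(\varepsilon < 1\), a 1.0% income increase raises spending by only about 0.31%, which is what makes this demand income-inelastic.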
249

Evaluating Vivado High-Level Synthesis on OpenCV Functions for the Zynq-7000 FPGA

Johansson, Henrik January 2015 (has links)
More complex and intricate Computer Vision algorithms, combined with higher-resolution image streams, put ever greater demands on processing power. CPU clock frequencies are pushing the limits of attainable speed, so CPUs have instead started to grow in number of cores. The performance of most Computer Vision algorithms responds well to parallel solutions. Dividing an algorithm over 4-8 CPU cores can give a good speed-up, but chips with Programmable Logic (PL), such as FPGAs, can give even more. An interesting recent addition to the FPGA family is a System on Chip (SoC) that combines a CPU and an FPGA on one chip, such as the Zynq-7000 series from Xilinx. This tight integration between the Programmable Logic and the Processing System (PS) opens the door to designs in which a C program uses the programmable logic to accelerate selected parts of an algorithm while still behaving like a C program. On that subject, Xilinx has introduced a new High-Level Synthesis Tool (HLST) called Vivado HLS, which can accelerate C code by synthesizing it to Hardware Description Language (HDL) code. This potentially bridges two otherwise very separate worlds: the ever-popular OpenCV library and FPGAs. This thesis focuses on evaluating Vivado HLS from Xilinx, primarily with image processing in mind, for potential use on GIMME-2: a system with a Zynq-7020 SoC and two high-resolution image sensors, tailored for stereo vision.
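As a rough illustration of the kind of C code Vivado HLS can synthesize to hardware (a hypothetical example, not taken from the thesis), here is a fixed-size binary-threshold kernel written in the synthesizable style HLS expects; `#pragma HLS PIPELINE` is a real HLS directive that an ordinary C++ compiler simply ignores:

```cpp
#include <cstdint>
#include <cstddef>

// Hypothetical HLS-style kernel: binary threshold over a fixed-size image
// buffer. Loop bounds are compile-time constants because HLS needs
// statically analyzable trip counts to generate hardware.
void threshold_image(const uint8_t in[64], uint8_t out[64], uint8_t thresh) {
    for (std::size_t i = 0; i < 64; ++i) {
#pragma HLS PIPELINE II = 1
        // With an initiation interval of 1, the synthesized pipeline
        // processes one pixel per clock cycle once it is full.
        out[i] = (in[i] > thresh) ? uint8_t(255) : uint8_t(0);
    }
}
```

In a design of this kind, the image would typically stream into the PL from the PS over an AXI interface, with OpenCV used on the CPU side only for pre- and post-processing.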
250

Calcul de probabilités d'événements rares liés aux maxima en horizon fini de processus stochastiques / Calculation of probabilities of rare events related to the finite-horizon maxima of stochastic processes

Shao, Jun 12 December 2016 (has links)
Initiated within the framework of an ANR project (the MODNAT project) on the stochastic modeling of natural hazards and the probabilistic quantification of their dynamic effects on mechanical and structural systems, this thesis addresses the calculation of probabilities of rare events related to the maxima of stochastic processes over a finite time horizon, under the following four constraints: (1) the set of processes considered must contain the four main categories encountered in random dynamics, namely stationary Gaussian, non-stationary Gaussian, stationary non-Gaussian and non-stationary non-Gaussian processes; (2) these processes may be described by their distributions, be functions of processes described by their distributions, be solutions of stochastic differential equations, or even solutions of stochastic differential inclusions; (3) the events in question are crossings of very high thresholds by the finite-horizon maxima of these processes, and they are of very rare occurrence, hence of very small probability (on the order of 10^-4 to 10^-8), owing to the height of the thresholds; and finally (4) a Monte Carlo approach to this type of calculation is ruled out, being too time-consuming given the preceding constraints. To solve such a problem, whose field of interest extends well beyond probabilistic mechanics and structural reliability (it arises in every scientific field connected with extreme value statistics, such as financial mathematics or economics), an innovative method is proposed, whose key idea emerged from the analysis of a large-scale statistical study carried out within the MODNAT project. 
This study, which analyzes the behavior of the extreme values of a large set of processes, revealed two germ functions depending explicitly on the target probability (the first directly, the second indirectly via an auxiliary conditional probability that is itself a function of the target probability) and possessing remarkable regularity properties recurring across all processes in the database; the method is built on the joint exploitation of these properties and of a "low-level approximation / high-level extrapolation" principle. Two versions of the method are first proposed, differing in the choice of the germ function; in each of them that function is approximated by a polynomial. A third version is also developed, based on the formalism of the second but using a Pareto-survival-function approximation of the germ function. The numerous numerical results presented attest to the remarkable effectiveness of the first two versions and show that they are of comparable accuracy. The third version, slightly less efficient than the first two, has the merit of establishing a direct link with extreme value theory. In each of its three versions, the proposed method is a clear advance over current methods dedicated to this type of problem, and its structure keeps it operational in an industrial context.
