21

Procedimento conceitual para a avaliação da qualidade de vida no trabalho em instituições de ensino superior públicas / A conceptual procedure for assessing quality of working life in public higher education institutions

SOARES, Veruska Gonçalves 30 September 2016 (has links)
This research adopts the Facet Theory approach, with the general aim of proposing a conceptual procedure for assessing Quality of Working Life (QWL) in public Higher Education Institutions from the staff's perspective. To this end, it uses Walton's QWL theoretical model and Herzberg's Two-Factor Theory to define conceptual categories suited to this type of assessment, structured interviews with staff from the eight departmental secretariats of the Center of Arts and Communication (CAC) at the Federal University of Pernambuco (UFPE) to collect data and empirically examine the validity of the initially established conceptual categories, and the non-metric multidimensional SSA (Similarity Structure Analysis) technique to interpret the data. The empirical evidence confirmed almost all of the categories initially proposed as hypotheses, demonstrating the consistency of the proposed conceptual procedure.
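The interpretive step above relies on the non-metric multidimensional SSA (Similarity Structure Analysis) technique. As a rough illustration of that family of methods, the sketch below runs non-metric multidimensional scaling on the correlations of synthetic questionnaire items using scikit-learn; the item names, response data, and choice of library are assumptions for illustration only and are not the thesis's instrument or software.

```python
# A minimal sketch of similarity-structure-style analysis: non-metric MDS on
# item correlations, so that items answered similarly land close together.
# All data and item names below are synthetic, invented for illustration.
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
items = ["pay", "safety", "growth", "autonomy", "recognition", "workload"]
responses = rng.integers(1, 6, size=(40, len(items)))  # 40 respondents, 1-5 scale

corr = np.corrcoef(responses, rowvar=False)
dissimilarity = 1.0 - corr  # higher correlation -> smaller distance

mds = MDS(n_components=2, metric=False, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissimilarity)
for item, (x, y) in zip(items, coords):
    print(f"{item:12s} ({x:+.3f}, {y:+.3f})")
```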
22

Novos algoritmos de simulação estocástica com atraso para redes gênicas / New delayed stochastic simulation algorithms for gene networks

Silva, Camillo de Lellis Falcão da 22 May 2014 (has links)
CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / The time efficiency of stochastic simulation algorithms for gene regulatory networks (GRNs) has recently motivated several scientific works. Interest in such algorithms stems from the fact that new high-throughput technologies in cell biology have shown that gene expression is a stochastic process. For GRNs with delays, the existing stochastic simulation algorithms have drawbacks (such as linear growth of asymptotic complexity, excessive discarding of random numbers during simulation, and implementations that are complex to code) which can result in poor simulation performance for large GRNs. This work presents a stochastic simulation algorithm called the simplified next reaction method (SNRM), which proved more efficient than the other existing approaches for stochastic simulation of GRNs with delays. In addition, a new dependency graph for delayed reactions, named the delayed dependency graph (DDG), is presented; its use considerably increased the efficiency of all versions of the delayed stochastic simulation algorithms presented in this work. Finally, a data structure named the hashing sorted list is used to handle the waiting list of products in simulations of GRNs with delays; this data structure also proved more efficient than a heap in all tested simulations. Together, these improvements form a set of strategies that effectively increase the performance of delayed stochastic simulation algorithms for gene regulatory networks.
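Since the abstract centers on delayed stochastic simulation and a waiting list of pending products, a bare-bones sketch of such a simulation loop may help fix ideas. The two-species network, rates, and delay below are invented; the scheduling structure is a plain binary heap rather than the thesis's hashing sorted list, and the loop is a simplified direct-method variant, not the SNRM algorithm.

```python
# A minimal sketch of delayed stochastic simulation: a Gillespie-style direct
# method plus a waiting list of delayed products. Network and rates are made up.
import heapq
import random

state = {"G": 1, "M": 0}                      # one gene, its mRNA
rates = {"transcription": 0.5, "degradation": 0.1}
DELAY = 2.0                                   # delay before M appears after firing

t, t_end = 0.0, 50.0
waiting = []                                  # min-heap of (completion_time, species)

while t < t_end:
    a_tx = rates["transcription"] * state["G"]
    a_deg = rates["degradation"] * state["M"]
    a_total = a_tx + a_deg
    if a_total == 0 and not waiting:
        break
    tau = random.expovariate(a_total) if a_total > 0 else float("inf")

    # If a delayed product completes before the next reaction, handle it first.
    if waiting and waiting[0][0] < t + tau:
        t, species = heapq.heappop(waiting)
        state[species] += 1
        continue

    t += tau
    if random.random() * a_total < a_tx:
        heapq.heappush(waiting, (t + DELAY, "M"))  # product released after the delay
    else:
        state["M"] -= 1                            # degradation is immediate

print("final time:", round(t, 2), "state:", state)
```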
23

Generická syntéza invariantů v programu založená na šablonách / Generic Template-Based Synthesis of Program Abstractions

Marušák, Matej January 2019 (has links)
The aim of this thesis is the design and implementation of a generic strategy solver for the 2LS tool. 2LS is an analyzer for the static verification of programs written in C. Using abstract interpretation, the verified program is analyzed by an SMT solver. The translation from an abstract program state into a logical formula that the SMT solver can work with is performed by a component called the strategy solver. Currently, one such solver exists for each abstract domain. The proposed solution creates a single generic strategy solver, which simplifies the creation of new domains; at the same time, it allows existing domains to be converted and thus reduces the size of the analyzer.
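The abstract describes a strategy solver that turns template-shaped abstract values into queries for an SMT solver. The sketch below shows the underlying idea on a toy example: checking that a fixed-shape template with a chosen parameter is an inductive invariant of a loop, using the z3 Python bindings. The loop, template shape, and parameter value are invented for illustration and are unrelated to 2LS's actual domains or solver interface.

```python
# A minimal sketch of template-based invariant checking with an SMT solver.
# Template shape: x <= BOUND, for a hypothetical loop  x = 0; while (x < 10) x = x + 1;
from z3 import And, Int, Not, Solver, unsat

x, x1 = Int("x"), Int("x1")        # pre- and post-state of one loop iteration
BOUND = 10                         # template parameter chosen by hand for illustration

def inv(v):
    return v <= BOUND              # the fixed template instantiated on a variable

init = x == 0                      # initial states
trans = And(x < 10, x1 == x + 1)   # one loop iteration

def valid(*constraints):
    # The property holds iff no counterexample exists (the query is unsat).
    s = Solver()
    s.add(*constraints)
    return s.check() == unsat

print("initiation :", valid(init, Not(inv(x))))            # initial states satisfy the template
print("consecution:", valid(inv(x), trans, Not(inv(x1))))  # the template is inductive
```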
24

[pt] PREVISÃO DE VELOCIDADE DO VENTO UTILIZANDO SINGULAR SPECTRUM ANALYSIS / [en] WIND SPEED PREDICTION USING SINGULAR SPECTRUM ANALYSIS

LARISSA MORAES DANTAS CAMPOS 14 September 2020 (has links)
[en] A paradigm shift around the world has been driven by growing concern over the use of fossil fuels as the main source of electricity generation, the associated climate change, and increasing environmental damage. In recent years, wind energy has grown steadily as a sustainable alternative for electricity production, as can be seen from the growth of its installed capacity worldwide. Brazil is among the ten countries with the largest installed capacity, and wind power accounted for 9.42 percent of its electricity generation in 2019. However, the randomness and intermittency of the wind are the biggest challenges in integrating this source into the power system. In this context, this research applies the Singular Spectrum Analysis (SSA) technique as a forecasting method for a wind speed series in Brazil, carrying out a comparative analysis of SSA models with different forecast horizons and training sets of different sizes for different forecast days. Specifically, a training set covering the whole year is compared with one covering only the last month of the series to forecast the last seven days of December. The results show that, for most days, using the whole year as the training set performed better, indicating that the SSA technique can be a suitable alternative for time series with large amounts of data.
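The forecasting method named here is Singular Spectrum Analysis. The sketch below shows its core steps (embedding, SVD, grouping, diagonal averaging) on a synthetic series; the window length, number of components, and data are assumptions for illustration, not the thesis's wind-speed series or configuration.

```python
# A minimal sketch of Singular Spectrum Analysis reconstruction with numpy.
import numpy as np

def ssa_reconstruct(series, window, n_components):
    n = len(series)
    k = n - window + 1
    # 1. Embedding: build the trajectory (Hankel) matrix.
    X = np.column_stack([series[i:i + window] for i in range(k)])
    # 2. Decomposition: SVD of the trajectory matrix.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    # 3. Grouping: keep the leading components.
    X_hat = sum(s[i] * np.outer(U[:, i], Vt[i]) for i in range(n_components))
    # 4. Diagonal averaging (Hankelization) back to a one-dimensional series.
    rec = np.zeros(n)
    counts = np.zeros(n)
    for i in range(window):
        for j in range(k):
            rec[i + j] += X_hat[i, j]
            counts[i + j] += 1
    return rec / counts

t = np.arange(365)
series = 5 + 2 * np.sin(2 * np.pi * t / 30) + np.random.normal(0, 0.5, len(t))
trend_cycle = ssa_reconstruct(series, window=30, n_components=3)
print("last 7 reconstructed values:", np.round(trend_cycle[-7:], 2))
```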
25

Nexus approach and environmental resource governance in Sub‑Saharan Africa: a systematic review

Kimengsi, Jude Ndzifon, Owusu, Raphael, Balgah, Roland Azibo 06 June 2024 (has links)
Sub-Saharan Africa (SSA) is replete with significant environmental resources, including forests, water, land, and energy, although its transition to a bio-resource economy is yet to be actualized. Consequently, there are limited socio-economic gains from resource valorization. These challenges, which stall progress towards the attainment of several interlinked sustainable development goals, are rooted, among other things, in resource governance defects. Furthermore, the persistence of knowledge fragmentation on resource governance obscures possibilities for in-depth theorizing of the nexus approach. In this light, two questions beg for answers: (i) To what extent are governance indicators captured in empirical studies on the nexus approach in SSA? (ii) What questions and approaches should inform future research on the nexus approach in SSA? To answer these questions, this paper systematically reviews 100 peer-reviewed articles (with 154 cases) that address governance questions in nexus studies within the broad framework of bioeconomy transitioning in SSA. Using the PROFOR analytical framework, our analysis reveals the following: (1) Although sub-regional variations exist in the application of nexus thinking, the overall emphasis in SSA is on first-level resource transformation. (2) With only 5% of studies explicitly mentioning the nexus approach, there is a strong case for prioritizing nexus thinking in future research. (3) While efficiency is the most recurrent governance indicator in the literature (69%), its assurance in resource nexus and transformation remains insignificant. (4) Interlinked questions of equity, participation, transparency, and conflict management have not been sufficiently addressed in studies on the nexus approach. The paper points to an urgent need for in-depth, multi-country, and interdisciplinary research on these governance parameters in the nexus approach, as a prerequisite to advancing the science-policy interface in nexus thinking in SSA.
26

The role regime type plays with respect to intelligence cooperation: the case of South Africa and Israel

Walbrugh, Dean John January 2019 (has links)
Magister Administrationis - MAdmin / This thesis explores the intelligence cooperation between South Africa and Israel during apartheid (1948-1994) and post-apartheid (1994-2015), examining regime type as a factor shaping the intelligence relationship in both periods. Pertinent to the case study is the fact that the two states' regime types shared commonalities during the first period but not the second. The thesis examines how these commonalities facilitated intelligence cooperation during apartheid, and then turns to how the change in South Africa's regime type after 1994 (while Israel's remained the same) affected that cooperation. To understand the significance of South Africa's regime change for the intelligence relationship, a comprehensive theoretical framework is proposed to analyse how and why the internal policies of the two states redirected it. The concept of regime type is not used in a conventional way here; it is framed through a constructivist notion that focuses on identity and on how identity shapes the two states' intelligence bureaucracies. This constructivist framing is in turn juxtaposed with two other International Relations (IR) theories, realism and liberalism. The thesis therefore explores how the system of apartheid in South Africa, and a system in Israel that has been compared to apartheid, brought the two states together at the level of national interest; what constituted that perceived alignment of national interests, and how it filtered down to the bureaucratic level, is better understood through the constructivist notions of culture and identity that solidified the relationship. Culture and identity formed the basis of what made the relationship strong and, as this thesis shows, manifested in intelligence cooperation that goes beyond what realists would predict. Although liberalism explains the apartheid-era relationship better, it cannot explain why the relationship was not severed after apartheid; since the end of apartheid the intelligence relationship has been deteriorating, but only gradually. Applying the three main IR theories to explain and understand the post-apartheid South Africa-Israel relationship, the study finds that although realism and liberalism are useful, interpreting regime type in a constructivist way adds significantly to explanations of the role regime type plays.
27

Décompositions parcimonieuses pour l'analyse avancée de données en spectrométrie pour la Santé / Sparse decompositions for advanced data analysis of hyperspectral data in biological applications

Rapin, Jérémy 19 December 2014 (has links)
Blind source separation (BSS) aims at extracting unknown source signals from observations in which these sources are mixed together by an unknown process. However, this very generic and unsupervised approach does not always provide exploitable results, so it is often necessary to add constraints, generally arising from physical considerations, in order to favor the recovery of sources with a particular sought-after structure. Non-negative matrix factorization (NMF), the main focus of this thesis, searches for non-negative sources observed through non-negative linear mixtures. In some cases, further information is still needed to separate the sources correctly. Here, we focus on the concept of sparsity, which helps improve the contrast between the sources while providing very robust approaches, even when the data are contaminated by noise. We show that, in order to obtain stable solutions, the non-negativity and sparsity constraints must be applied adequately. Moreover, enforcing sparsity in a potentially redundant transformed domain, which can capture the structure of most natural signals, proves difficult to apply together with the non-negativity constraint in the direct domain. We therefore propose a new sparse NMF algorithm, named nGMCA (non-negative Generalized Morphological Component Analysis), which overcomes these difficulties using proximal calculus techniques. Experiments on simulated data show that this algorithm is robust to contamination by additive Gaussian noise, thanks to automatic control of the sparsity parameter, and comparisons with state-of-the-art NMF algorithms on realistic data show the efficiency and robustness of the proposed approach. Finally, we apply nGMCA to liquid chromatography - mass spectrometry (LC-MS) data. These data turn out to be contaminated by multiplicative noise, which greatly degrades the results of NMF algorithms. An extension of nGMCA, designed to take this type of noise into account through a non-stationary prior, then obtains excellent results on annotated real data.
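Since the abstract revolves around sparse, non-negative matrix factorization, a small sketch of the basic idea may be useful. The code below runs a plain projected proximal-gradient version of sparse NMF on synthetic mixtures; it is illustrative only and is not the nGMCA algorithm, which applies sparsity in a transformed domain, uses proximal splitting, and manages the sparsity parameter automatically.

```python
# A minimal sketch of sparse NMF: alternate projected proximal-gradient steps on
# the sources S (soft threshold + non-negativity) and the mixing matrix A
# (non-negativity only). Data and parameters are synthetic and illustrative.
import numpy as np

def sparse_nmf(Y, rank, lam=0.1, n_iter=500):
    m, n = Y.shape
    rng = np.random.default_rng(0)
    A = rng.random((m, rank))          # non-negative mixing matrix
    S = rng.random((rank, n))          # non-negative, hopefully sparse sources
    for _ in range(n_iter):
        Ls = np.linalg.norm(A, 2) ** 2 + 1e-12   # step size for the S update
        S = np.maximum(S - (A.T @ (A @ S - Y)) / Ls - lam / Ls, 0.0)
        La = np.linalg.norm(S, 2) ** 2 + 1e-12   # step size for the A update
        A = np.maximum(A - ((A @ S - Y) @ S.T) / La, 0.0)
    return A, S

rng = np.random.default_rng(1)
S_true = rng.random((2, 200)) * (rng.random((2, 200)) > 0.8)  # sparse sources
A_true = rng.random((6, 2))
Y = A_true @ S_true                                            # non-negative mixtures
A_est, S_est = sparse_nmf(Y, rank=2)
print("relative residual:", np.linalg.norm(Y - A_est @ S_est) / np.linalg.norm(Y))
```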
28

Étude des problèmes de spilling et coalescing liés à l'allocation de registres en tant que deux phases distinctes / A study of the spilling and coalescing problems in register allocation as two separate phases

Bouchez, Florent 30 April 2009 (has links) (PDF)
The goal of register allocation is to assign the variables of a program to registers, or to "spill" them to memory when no register is available. Memory is much slower, so it is preferable to minimize spilling. This problem is hard and is closely tied to the colorability of the program: Chaitin et al. [1981] modeled register allocation as coloring the interference graph, which they proved NP-complete, so in this model there is no exact test indicating whether spilling is needed and, if so, what to spill and where. In Chaitin et al.'s algorithm, a spilled variable is removed throughout the whole program, which is wasteful wherever enough registers are still available. To remedy this, many authors observed that live ranges can be split by inserting copy instructions, creating smaller intervals and allowing variables to be spilled over smaller regions; the difficulty is then to choose good places to split. In practice, better results are obtained when live ranges are split at very many points [Briggs, 1992; Appel and George, 2001]; coalescing is then expected to remove most of these copies, but if it fails, the benefit of the better spilling can be cancelled out. This is why Appel and George [2001] created the "Coalescing Challenge". Recently (2004), three teams discovered that the interference graph of a program in Static Single Assignment (SSA) form is chordal. Coloring the graph then becomes easy with a simplicial elimination scheme, and the community has been asking whether SSA simplifies register allocation. Our hope was that, as with coloring, spilling and coalescing would become easier to solve now that an exact coloring test is available. Our first goal was therefore to better understand where the complexity of register allocation comes from and why SSA seems to simplify the problem. We went back to the original proof of Chaitin et al. [1981] to show that the difficulty comes from the presence of critical edges and from whether color permutations can be performed. We studied the spilling problem under SSA and several versions of the coalescing problem: the general cases are NP-complete, but we found a polynomial result for incremental coalescing under SSA. We used it to devise new, more efficient heuristics for coalescing, which enables aggressive live-range splitting. This led us to recommend a better scheme for register allocation: whereas previous attempts gave mixed results, our improved coalescing makes it possible to cleanly separate register allocation into two independent phases. First, spill to reduce register pressure, potentially splitting live ranges many times; second, color the variables and apply coalescing to remove as many copies as possible. This scheme should be very effective in an aggressive compiler; however, the large number of splits and the increased compilation time required to run coalescing are prohibitive for just-in-time (JIT) compilation. We therefore devised a new heuristic called "permutation motion", designed to be used with SSA-based splitting, that can replace our coalescing in that context.
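The abstract's key structural fact is that interference graphs of SSA-form programs are chordal, so coloring becomes easy with a simplicial elimination scheme. The sketch below illustrates that point: a maximum cardinality search produces an ordering whose greedy coloring is optimal on a chordal graph. The interference graph is invented for illustration, and spilling and coalescing are not modeled.

```python
# A minimal sketch of coloring a chordal interference graph: maximum cardinality
# search (MCS) followed by greedy coloring along the MCS order, which is the
# reverse of a perfect elimination ordering and therefore yields an optimal coloring.
def max_cardinality_search(graph):
    order, weight = [], {v: 0 for v in graph}
    while weight:
        v = max(weight, key=weight.get)   # unvisited vertex with most visited neighbors
        order.append(v)
        del weight[v]
        for u in graph[v]:
            if u in weight:
                weight[u] += 1
    return order

def greedy_color(graph, order):
    color = {}
    for v in order:                        # MCS order = reverse perfect elimination ordering
        used = {color[u] for u in graph[v] if u in color}
        color[v] = next(c for c in range(len(graph)) if c not in used)
    return color

# Hypothetical chordal interference graph of five SSA variables.
interference = {
    "a": {"b", "c"},
    "b": {"a", "c", "d"},
    "c": {"a", "b", "d"},
    "d": {"b", "c", "e"},
    "e": {"d"},
}
order = max_cardinality_search(interference)
print(greedy_color(interference, order))   # register index assigned to each variable
```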
29

The Effects of Private Investment on Growth in Sub-Saharan Africa Between 1990 and 2008

Komeh, Tamba Fillie January 2012 (has links)
No description available.
30

La représentation SSA: sémantique, analyses et implémentation dans GCC / The SSA representation: semantics, analyses and implementation in GCC

Pop, Sebastian 12 1900 (has links) (PDF)
Static single assignment form, SSA, is one of the intermediate representations most commonly used in production compilers. Interest from the static program analysis community has nevertheless been minimal, a fact due to the weak formal foundations of the SSA language. This thesis presents a denotational semantics of the SSA language, allowing static analyses of SSA programs to be defined formally using the classical methods of abstract interpretation. From a practical point of view, the thesis presents the implementation of the formally defined static analyzers in a production compiler, the GNU Compiler Collection, GCC.
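For readers unfamiliar with the SSA representation discussed here, the sketch below illustrates the basic renaming idea on invented straight-line three-address statements: each assignment creates a fresh version of its target, and uses refer to the latest version. GCC's GIMPLE/SSA infrastructure (phi nodes, control flow, memory SSA) is far richer than this toy example.

```python
# A minimal sketch of SSA renaming for straight-line code.
def to_ssa(statements):
    version = {}
    out = []
    for target, op, operands in statements:
        # Rename each use to the latest defined version of that variable.
        renamed = [f"{v}_{version[v]}" if v in version else v for v in operands]
        # Each definition gets a fresh version number.
        version[target] = version.get(target, 0) + 1
        out.append((f"{target}_{version[target]}", op, renamed))
    return out

prog = [
    ("x", "=", ["1"]),
    ("y", "+", ["x", "2"]),
    ("x", "+", ["x", "y"]),   # redefinition of x gets a new SSA version
    ("z", "*", ["x", "y"]),
]
for target, op, operands in to_ssa(prog):
    print(target, "<-", op, operands)
```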
