81

Salient object detection and segmentation in videos / Détection d'objets saillants et segmentation dans des vidéos

Wang, Qiong 09 May 2019 (has links)
This thesis focuses on video salient object detection and video object instance segmentation, which aim to detect the most attractive objects in a video sequence or to assign consistent object IDs to each of its pixels. One approach, one overview and one extended model are proposed for video salient object detection, and one approach is proposed for video object instance segmentation. For video salient object detection, we propose: (1) a traditional approach that detects the whole salient object with the help of "virtual borders". A guided filter is applied to the temporal output to integrate spatial edge information for a better detection of the salient object's edges. A global spatio-temporal saliency map is obtained by combining the spatial saliency map and the temporal saliency map according to their entropy. (2) An overview of recent developments in deep-learning-based methods, including a classification of the state-of-the-art methods and their frameworks, and an experimental comparison of their performance. (3) An extended model that further improves the performance of the proposed traditional approach by integrating a deep-learning-based image salient object detection method. For video object instance segmentation, we propose a deep-learning approach in which a warping-confidence computation first judges the confidence of the warped mask map, then a semantic selection is introduced to optimize the warped map, where the object is re-identified using the semantic labels of the target object. The proposed approaches have been assessed on publicly available large-scale and challenging datasets, and the experimental results show that they outperform the state-of-the-art methods.
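As an illustration only (not code from the thesis), the sketch below shows one plausible way to fuse a spatial and a temporal saliency map with entropy-derived weights, which is the high-level idea the abstract describes; the inverse-entropy weighting rule and all names are assumptions.

```python
import numpy as np

def map_entropy(saliency: np.ndarray, bins: int = 64) -> float:
    """Shannon entropy of the map's intensity histogram (maps assumed in [0, 1])."""
    hist, _ = np.histogram(saliency, bins=bins, range=(0.0, 1.0))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def fuse_saliency(spatial: np.ndarray, temporal: np.ndarray) -> np.ndarray:
    """Entropy-weighted fusion: the more 'peaked' (lower-entropy) map is
    assumed to be more reliable and receives the larger weight."""
    w_s = 1.0 / (map_entropy(spatial) + 1e-6)
    w_t = 1.0 / (map_entropy(temporal) + 1e-6)
    fused = (w_s * spatial + w_t * temporal) / (w_s + w_t)
    return (fused - fused.min()) / (fused.max() - fused.min() + 1e-6)  # rescale to [0, 1]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    spatial = rng.random((120, 160))                                 # noisy stand-in spatial map
    temporal = np.zeros((120, 160)); temporal[40:80, 60:100] = 1.0   # peaked stand-in temporal map
    print(fuse_saliency(spatial, temporal).shape)
```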
82

Gestion d'une évolution du schéma d'une base de données à objets: une approche par compromis / Managing schema evolution in an object database: a compromise-based approach

Benatallah, Boualem 04 March 1996 (has links) (PDF)
In this thesis, we address the problem of schema evolution for object databases. We first review the solutions proposed for managing schema evolution in object databases and propose a classification of the existing approaches. For each approach we describe its principle, the associated evolution mechanisms, and the products and prototypes that implement it, and we analyze this work by highlighting the advantages and drawbacks of each approach. We then present our own approach. On the one hand, it provides a framework that combines the functionalities of modification and versioning for better management of schema evolution. On the other hand, it offers the user a language for describing the links between the different states of the database, so as to reflect real-world changes as faithfully as possible. Schema versioning avoids information loss and ensures that old application programs keep working. However, the number of versions can become large, which makes their management complex. Our approach limits the number of versions: (1) a schema evolution is translated into a modification of the schema if the evolution is non-subtractive (it does not cause the deletion of properties) or if the user so decides; (2) the technique used to adapt instances to the schema after an evolution is based on characterizing how important the existence of an object version is in itself, so the number of versions is limited to those that are frequently accessed by programs; (3) the administrator is given the possibility of reorganizing the database, which allows historical versions of the schema to be deleted.
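A minimal sketch, assuming a toy schema representation that is not taken from the thesis, of the decision rule described above: a non-subtractive evolution (or one forced by the user) modifies the schema in place, while a subtractive one keeps the previous schema as a version.

```python
from dataclasses import dataclass, field

@dataclass
class ClassSchema:
    name: str
    properties: set[str]

@dataclass
class SchemaManager:
    current: ClassSchema
    versions: list[ClassSchema] = field(default_factory=list)

    def evolve(self, new_properties: set[str], force_modification: bool = False) -> str:
        """Modify in place when the evolution is non-subtractive (no existing
        property disappears) or when the user forces it; otherwise keep the
        old schema as a version so old application programs keep working."""
        subtractive = bool(self.current.properties - new_properties)
        if subtractive and not force_modification:
            self.versions.append(self.current)
            self.current = ClassSchema(self.current.name, new_properties)
            return "versioned"
        self.current = ClassSchema(self.current.name, new_properties)
        return "modified"

mgr = SchemaManager(ClassSchema("Person", {"name", "age"}))
print(mgr.evolve({"name", "age", "email"}))   # additive evolution -> "modified"
print(mgr.evolve({"name", "email"}))          # drops "age" -> "versioned"
```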
83

Βελτιστοποίηση ερωτημάτων με πολλαπλά κριτήρια σε βάσεις δεδομένων / Multiobjective query optimization under parametric aggregation constraints

Ρήγα, Γεωργία 24 September 2007 (has links)
Multiobjective query optimization in databases is a difficult and interesting research problem because it is characterized by conflicting requirements: each step in answering a query can be executed in more than one way. Several algorithms have been proposed for such queries, the most recent being Mariposa, M' and Generate Partitions. Mariposa and M' are applied in the Mariposa database system, which lets the user specify the desired delay/cost tradeoff for each query, that is, supply a decreasing function u(d) specifying how much the user is willing to pay in order to receive the query results within time d. Mariposa divides a query graph into horizontal strides, analyzes each stride, and uses a greedy heuristic that tries to maximize the immediate "profit" at each step to find the best plan for all strides, whereas M' replaces the greedy criterion by choosing the next step from sets of Pareto-optimal solutions. Finally, the Generate Partitions algorithm partitions the answer space using R-tree structures and achieves very good performance.
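A toy sketch (not an implementation of Mariposa or M') contrasting the two selection rules mentioned above: a greedy choice that maximizes immediate profit under a decreasing willingness-to-pay u(d), versus restricting the candidates to a Pareto-optimal set; all plan names, costs and delays are invented.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PlanStep:
    name: str
    cost: float    # monetary cost of executing this step
    delay: float   # time added by this step

def greedy_choice(steps, budget_fn):
    """Mariposa-style greedy rule: pick the step with the largest immediate
    'profit' (the user's budget for that delay minus the step's cost)."""
    return max(steps, key=lambda s: budget_fn(s.delay) - s.cost)

def pareto_front(steps):
    """M'-style candidate set: keep only steps not dominated in both cost and delay."""
    return [s for s in steps
            if not any(o.cost <= s.cost and o.delay <= s.delay and o != s for o in steps)]

budget = lambda d: 100.0 / (1.0 + d)   # a decreasing willingness-to-pay u(d)
candidates = [PlanStep("local-scan", cost=5, delay=8),
              PlanStep("remote-scan", cost=12, delay=2),
              PlanStep("cached", cost=20, delay=1)]

print(greedy_choice(candidates, budget).name)       # greedy pick
print([s.name for s in pareto_front(candidates)])   # Pareto-optimal candidates
```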
84

Hierarchical Bayesian Learning Approaches for Different Labeling Cases

Manandhar, Achut January 2015 (has links)
The goal of a machine learning problem is to learn useful patterns from observations so that appropriate inference can be made from new observations as they become available. Based on whether labels are available for the training data, the vast majority of machine learning approaches can be broadly categorized as supervised or unsupervised learning approaches. In the context of supervised learning, when observations are available as labeled feature vectors, the learning process is a well-understood problem. However, for many applications, standard supervised learning becomes complicated because the labels for observations are unavailable as labeled feature vectors. For example, in a ground penetrating radar (GPR) based landmine detection problem, the alarm locations are known only as 2D coordinates on the earth's surface, while the individual target depths are unknown. Typically, in order to apply computer vision techniques to the GPR data, it is convenient to represent the GPR data as a 2D image. Since a large portion of the image does not contain useful information pertaining to the target, the image is typically further subdivided into subimages along depth. The subimages at a particular alarm location can be considered a set of observations, where the label is only available for the entire set but unavailable for the individual observations along depth. In the absence of individual observation labels, for the purposes of training standard supervised learning approaches, observations both above and below the target are labeled as targets despite substantial differences in their characteristics. As a result, the label uncertainty with depth complicates parameter inference in standard supervised learning approaches, potentially degrading their performance. In this work, we develop learning algorithms for three such specific scenarios where: (1) labels are only available for sets of independent and identically distributed (i.i.d.) observations, (2) labels are only available for sets of sequential observations, and (3) continuous correlated multiple labels are available for spatio-temporal observations. For each of these scenarios, we propose a modification of a traditional learning approach to improve its predictive accuracy. The first two algorithms are based on a set-based framework called multiple instance learning (MIL), whereas the third algorithm is based on a structured output-associative regression (SOAR) framework. The MIL approaches are motivated by the landmine detection problem using GPR data, where the training data is typically available as labeled sets of observations or sets of sequences. The SOAR learning approach is instead motivated by the multi-dimensional human emotion label prediction problem using audio-visual data, where the training data is available in the form of multiple continuous correlated labels representing complex human emotions. In both of these applications, the unavailability of the training data as labeled feature vectors motivates developing new learning approaches that are more appropriate for modeling the data.
A large majority of the existing MIL approaches require computationally expensive parameter optimization, do not generalize well with time-series data, and are incapable of online learning. To overcome these limitations, for sets of observations, this work develops a nonparametric Bayesian approach to learning in MIL scenarios based on Dirichlet process mixture models. The nonparametric nature of the model and the use of non-informative priors remove the need to perform cross-validation based optimization, while variational Bayesian inference allows for rapid parameter learning. The resulting approach is highly generalizable and also capable of online learning. For sets of sequences, this work integrates hidden Markov models (HMMs) into an MIL framework and develops a new approach called the multiple instance hidden Markov model. The model parameters are inferred using variational Bayes, making the model tractable and computationally efficient. The resulting approach is highly generalizable and also capable of online learning. Similarly, most of the existing approaches developed for modeling multiple continuous correlated emotion labels do not model the spatio-temporal correlation among the emotion labels. The few approaches that do model the correlation fail to predict the multiple emotion labels simultaneously, resulting in latency during testing and potentially compromising the effectiveness of implementing the approach in a real-time scenario. This work integrates the output-associative relevance vector machine (OARVM) approach with the multivariate relevance vector machine (MVRVM) approach to simultaneously predict multiple emotion labels. The resulting approach performs competitively with the existing approaches while reducing the prediction time during testing, and the sparse Bayesian inference allows for rapid parameter learning. Experimental results on several synthetic datasets, benchmark datasets, GPR-based landmine detection datasets, and human emotion recognition datasets show that our proposed approaches perform comparably to or better than the existing approaches. / Dissertation
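Not the Bayesian models of the thesis, but a minimal sketch of the MIL labeling setup it builds on: observations come in bags, only the bag carries a label, and under the standard MIL assumption a bag is scored by its best-scoring instance. The synthetic data and the linear scorer are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_bag(is_target: bool, n_instances: int = 6, dim: int = 4) -> np.ndarray:
    """A bag is a set of feature vectors (e.g. depth sub-images at one alarm);
    only the bag carries a label, the instances inside it do not."""
    X = rng.normal(0.0, 1.0, size=(n_instances, dim))
    if is_target:
        X[rng.integers(n_instances)] += 3.0   # one (unknown) instance is truly a target
    return X

def bag_score(bag: np.ndarray, w: np.ndarray) -> float:
    """Standard MIL assumption: a bag is positive if at least one instance is,
    so the bag score is the maximum over instance scores."""
    return float(np.max(bag @ w))

bags = [make_bag(is_target=(i % 2 == 0)) for i in range(10)]
labels = [i % 2 == 0 for i in range(10)]
w = np.ones(4)                                 # placeholder instance scorer
preds = [bag_score(b, w) > 4.0 for b in bags]  # arbitrary threshold for the toy data
print(sum(p == y for p, y in zip(preds, labels)), "of", len(labels), "bags correct")
```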
85

SEnsembles – uma abordagem para melhorar a qualidade das correspondências de instâncias disjuntas em estudos observacionais explorando características idênticas e ensembles de regressores / SEnsembles – an approach to improve the quality of disjoint-instance matching in observational studies by exploiting identical characteristics and ensembles of regressors

Borges Junior, Sergio Ricardo 16 December 2016 (has links)
Introduction. The datasets used in observational studies contain instances belonging to two distinct groups (a treatment group and a control group), which are compared in order to estimate the effect of the treatment on the outcomes. In one of the approaches, called Propensity Score Matching (PSM), the propensity score is estimated for the instances of both groups and the instances are then matched based on their propensity score values. The propensity score is the probability of assignment to a treatment given the observed characteristics (e.g. income, sex and age). In this context, logistic regression is widely used to estimate the propensity score, and there is a great variety of instance matching methods. Objective. The main objective of this doctoral thesis is to investigate computational alternatives to improve the quality of instance matching in datasets manipulated in observational studies. Methodology. Techniques that estimate the propensity score and methods to perform instance matching in observational studies were investigated. This made it possible to study how the identical characteristics of the instances could be exploited in a new matching process, and how ensembles could replace logistic regression when estimating the propensity scores of the instances within the PSM process. Proposal. This thesis proposes a new approach in the context of the PSM process, called "SEnsembles", which aims to improve the quality of instance matching based on two main processes: one uses a technique that handles the identical characteristics of the instances separately, and the other uses ensembles of regressors, namely bagging, random forest and boosting. Results. The proposed "SEnsembles" approach improves the quality of instance matching for most of the calipers used (zero, 0.05, 0.10, 0.15, 0.20, 0.25 and 0.30) when compared to the Nearest Neighbor Matching (NNM) baseline. In the experiments, when there was an improvement over the baseline, the technique that separates the identical characteristics of the instances improved matching quality by up to 53.8%, with an average gain of 12.1% and only a 2.7% average reduction in the number of matched instance pairs. The technique that replaces logistic regression with ensembles of regressors, in turn, produced the best matches with caliper zero and with the values 0.20, 0.25 and 0.30, with improvements of up to 36.3%, an average gain of 12.7% and a 7.6% reduction in the number of matched instance pairs.
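Not the SEnsembles implementation, but a minimal PSM sketch showing the two ingredients discussed above: a model that estimates the propensity score (a logistic regression here, swappable for a bagging/random-forest/boosting model as the thesis proposes) and greedy nearest-neighbour matching within a caliper. The synthetic data and helper names are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
n = 400
X = rng.normal(size=(n, 3))                            # observed covariates (e.g. income, sex, age)
treated = rng.random(n) < 1 / (1 + np.exp(-X[:, 0]))   # treatment assignment depends on covariates

def propensity_scores(model, X, treated):
    """Estimate P(treatment | covariates); the model choice is the part
    SEnsembles replaces with ensembles of regressors."""
    return model.fit(X, treated).predict_proba(X)[:, 1]

def caliper_match(ps, treated, caliper=0.05):
    """Greedy 1-to-1 nearest-neighbour matching on the propensity score,
    discarding pairs whose score distance exceeds the caliper."""
    t_idx, c_idx = np.where(treated)[0], np.where(~treated)[0]
    available, pairs = set(c_idx.tolist()), []
    for i in t_idx:
        if not available:
            break
        j = min(available, key=lambda c: abs(ps[i] - ps[c]))
        if abs(ps[i] - ps[j]) <= caliper:
            pairs.append((i, j))
            available.remove(j)
    return pairs

ps_logit = propensity_scores(LogisticRegression(max_iter=1000), X, treated)
ps_rf = propensity_scores(RandomForestClassifier(n_estimators=200, random_state=0), X, treated)
print(len(caliper_match(ps_logit, treated)), len(caliper_match(ps_rf, treated)))
```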
86

Auditoria e monitoramento de eventos inconsistentes em instâncias de máquinas virtuais em IaaS no Orquestrador Apache CloudStack / Auditing and monitoring of inconsistent events in virtual machine instances in IaaS in the Apache CloudStack Orchestrator

Pauro, Leandro Luis [UNESP] 06 December 2016 (has links)
Cloud computing has been increasingly adopted by companies as an economical and feasible means of providing resources and services. However, operational reliability and resource availability are still causes for concern, since a service provided by the cloud may become unavailable, which can lead to loss of revenue and customer distrust. It is therefore crucial to provide this environment with auditing and monitoring tools in order to prevent and eliminate inconsistencies that may cause the unavailability of the offered service. This work presents AMFC, an auditing and monitoring tool for clouds managed by the Apache CloudStack orchestrator. By synchronizing current-state information with the orchestrator's persistent data, it eliminates unused data and inconsistencies, reduces false-positive and false-negative alerts, and also lowers the cost of storing the cloud's persistent data. Its effectiveness was demonstrated by comparing a manual validation with the results obtained from running the tool on use cases generated in a controlled test environment. The results obtained after performing 1,320 administrative routines on virtual machine instances showed the identification and elimination of inconsistencies in the persistent database and a reduction in storage cost, resulting in a sound database that enables the cloud administrator to make more accurate decisions when investigating a problem occurring in the environment.
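A hypothetical sketch (not the AMFC tool itself, nor the CloudStack API) of the reconciliation idea described above: comparing the orchestrator's persistent records with the state actually observed on the hosts to find stale records and mismatched states.

```python
def find_inconsistencies(db_records: dict[str, str], live_states: dict[str, str]):
    """Compare the orchestrator's persistent records with the state reported by
    the hypervisors and classify the mismatches.
    Returns (stale records to purge, state mismatches to reconcile)."""
    stale = [vm for vm in db_records if vm not in live_states]          # VM gone, record left behind
    mismatched = [(vm, db_records[vm], live_states[vm])
                  for vm in db_records
                  if vm in live_states and db_records[vm] != live_states[vm]]
    return stale, mismatched

# Hypothetical snapshot: the persistent database vs. what the hosts report.
db = {"vm-01": "Running", "vm-02": "Stopped", "vm-03": "Running"}
live = {"vm-01": "Running", "vm-02": "Running"}          # vm-03 no longer exists

stale, mismatched = find_inconsistencies(db, live)
print("purge:", stale)            # ['vm-03']
print("reconcile:", mismatched)   # [('vm-02', 'Stopped', 'Running')]
```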
87

Uma abordagem visual para apoio ao aprendizado multi-instâncias / A visual approach to support multiple-instance learning

Sonia Castelo Quispe 14 August 2015 (has links)
Multiple-instance learning (MIL) is a machine learning paradigm that aims at classifying sets (bags) of objects (instances) while assigning labels only to the bags. In MIL, only the bag labels are available for training; the labels of the instances inside the bags are unknown. This problem is often addressed by selecting one instance to represent each bag, transforming the MIL problem into a standard supervised learning problem. However, no approaches are known that support the user in carrying out this process. In this work, we propose a multi-scale tree-based visualization called MILTree that supports users in MIL-related tasks, as well as two new instance selection methods, MILTree-SI and MILTree-Med, to improve MIL models. MILTree is a two-level tree layout: the first level projects the bags and the second level projects the instances belonging to each bag, allowing the user to explore and analyze multiple-instance data in an intuitive way. The instance selection methods define an instance prototype for each bag, a crucial step for achieving high accuracy in multiple-instance classification. Both methods use the MILTree layout to visually update the instance prototypes and can handle binary and multi-class datasets. To classify the bags, we use an SVM (Support Vector Machine) classifier. Moreover, with the support of the MILTree layout, the classification model can also be updated by changing the training set in order to obtain a better classifier. Experimental results validate the effectiveness of our approach, showing that visual mining with MILTree can help users in multiple-instance classification scenarios.
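A minimal sketch of the prototype-per-bag idea discussed above, assuming a simple medoid rule in place of MILTree-SI/MILTree-Med and purely synthetic bags: one prototype is selected per bag and a standard SVM is then trained on the prototypes with the bag labels.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(7)

def medoid(bag: np.ndarray) -> np.ndarray:
    """A simple prototype: the instance with the smallest total distance to
    the rest of its bag (a stand-in for the thesis's selection methods)."""
    d = np.linalg.norm(bag[:, None, :] - bag[None, :, :], axis=-1)
    return bag[d.sum(axis=1).argmin()]

# Toy bags: each bag holds 8 two-dimensional instances; positive bags are
# drawn around a shifted mean so that a prototype can carry the bag's label.
make_bag = lambda positive: rng.normal(1.5 if positive else 0.0, 1.0, size=(8, 2))
bags = [make_bag(i % 2 == 0) for i in range(40)]
y = np.array([i % 2 == 0 for i in range(40)], dtype=int)

X_proto = np.stack([medoid(b) for b in bags])   # one prototype vector per bag
clf = SVC(kernel="rbf").fit(X_proto, y)         # MIL reduced to standard supervised SVM
print("training accuracy on the toy bags:", clf.score(X_proto, y))
```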
88

As manifestações de junho de 2013 no Jornal Nacional: uma pesquisa em torno da instância da imagem ao vivo / The June 2013 demonstrations on Jornal Nacional: an inquiry into the instance of the live image

Karina Leal Yamamoto 25 October 2016 (has links)
The series of protests in June 2013 shook the Brazilian political scene like an earthquake: these were demonstrations that began and ended on screens. Besides being raw material for images, the marches also acted as social images (dependent on the social gaze) and itinerant images (needing to circulate in order to acquire value and meaning), founding a new visibility. To grasp the dimension of these events within the instance of the live image, the special edition of Jornal Nacional of 20 June of that year was analyzed. The results, obtained through content analysis, make evident the training of a gaze that, unmoved, looked down on the crowd from above.
89

Detecção de módulos de software propensos a falhas através de técnicas de aprendizagem de máquina / Detection of fault-prone software modules using machine learning techniques

BEZERRA, Miguel Eugênio Ramalho 31 January 2008 (has links)
The success of a piece of software depends directly on its quality. Traditionally, formal methods and manual code inspection are used to ensure it. Such methods are usually costly and time-consuming, so testing activities must be planned carefully to avoid wasting resources. Organizations are currently looking for fast and inexpensive ways to detect defects in software. However, even with all the advances of recent years, software development is still an activity that depends heavily on human effort and knowledge. Many researchers and organizations are interested in creating a mechanism capable of automatically predicting software defects. In recent years, machine learning techniques have been applied in several studies with this goal. This work investigates and presents a study of the feasibility of applying machine learning methods to the detection of fault-prone software modules. Classifiers such as artificial neural networks and instance-based learning techniques are used for this task, taking as their source of information software metrics drawn from the repository of NASA's Metrics Data Program (MDP). A set of improvements to some of these classifiers, proposed during this work, is also presented. Since the detection of defective modules is a cost-sensitive problem, this work also proposes a mechanism capable of analytically measuring the cost of each decision made by the classifiers.
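An illustrative sketch, not the thesis's classifiers or the real NASA MDP data: an instance-based learner (k-nearest neighbours) is trained on synthetic module metrics and its decisions are scored with an assumed asymmetric cost matrix, reflecting the cost-sensitive view mentioned above.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)

# Synthetic stand-in for MDP-style module metrics: [LOC, cyclomatic complexity, comment ratio]
n = 500
X = np.column_stack([rng.lognormal(4, 1, n), rng.lognormal(1.5, 0.7, n), rng.random(n)])
y = (X[:, 1] + 0.002 * X[:, 0] + rng.normal(0, 1, n) > 6).astype(int)   # 1 = fault-prone

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)   # instance-based learner
pred = clf.predict(X_te)

# Cost-sensitive view: missing a defective module (false negative) is assumed to
# cost far more than inspecting a clean one (false positive).
COST_FN, COST_FP = 10.0, 1.0
cost = COST_FN * np.sum((pred == 0) & (y_te == 1)) + COST_FP * np.sum((pred == 1) & (y_te == 0))
print("accuracy:", round(clf.score(X_te, y_te), 3), "| total misclassification cost:", cost)
```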
90

Věřitel poslední instance / Lender of Last Resort

Varvařovský, Petr January 2017 (has links)
The topic of this thesis is the lender of last resort. The author approaches the issue through an analysis of current European legislation, the available Czech and foreign literature, and other relevant sources. The function of national banks, or other institutions, as a lender of last resort is very complex, and the thesis examines it from both a legal and an economic perspective. The matter also has an obvious global societal dimension: adequate performance of the lender-of-last-resort function has a positive effect on the prosperity of society, whereas defective performance has the opposite effect. The thesis is divided into five chapters. The first two chapters present and clarify the term "lender of last resort" and provide definitions. The third chapter, which builds on the first two, provides the reader with the historical context of the lender of last resort, whose development began in the British Isles at the end of the 18th century. The fourth chapter is dedicated to the criteria for granting financial aid by the lender of last resort and the means by which that aid is provided. The author focuses in particular on the danger of systemic risk and on the too-big-to-fail doctrine. The last, fifth chapter...
