  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
81

Použití neuronových sítí pro generování realistických obrazů oblohy / Using neural networks to generate realistic skies

Hojdar, Štěpán January 2019 (has links)
Environment maps are widely used in several computer graphics fields, such as realistic architectural rendering and computer games, as sources of light in the scene. Obtaining these maps is not easy, since they must have both a high dynamic range and a high resolution. As a result, they are expensive to make and the supply is limited. Deep neural networks are a still largely unexplored research area, and they have been used successfully to generate complex, realistic images such as human portraits. Neural networks perform well at predicting data from complex models that are easily observable, such as photos of the real world. This thesis explores the idea of generating physically plausible environment maps using deep neural networks known as generative adversarial networks. Since no skydome dataset is publicly available, we develop a scalable capture process for both low-end and high-end hardware. We implement a pipeline to process the captured data before feeding it to a network, and extend an existing network architecture to generate HDR environment maps. We then run a series of experiments to assess the quality of the results and point out directions for further research.
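The abstract mentions a pipeline that processes captured HDR data before it is fed to the network. As an illustration only (not the author's actual pipeline), one common preprocessing step is global Reinhard tone mapping, which compresses unbounded HDR radiance into [0, 1) and can be approximately inverted:

```python
def reinhard_tonemap(hdr_pixels):
    """Compress HDR radiance values into [0, 1) with the global
    Reinhard operator L / (1 + L), applied per value."""
    return [L / (1.0 + L) for L in hdr_pixels]

def inverse_reinhard(ldr_pixels):
    """Approximate inverse, recovering HDR values from tone-mapped ones."""
    return [v / (1.0 - v) for v in ldr_pixels]
```

Such a reversible compression lets a network trained on bounded values still represent high-dynamic-range output after inversion.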
82

Discriminant Profile of Dimensions of Acquired Disability on Domains of Posttraumatic Growth

Portis, Linda Denise 01 January 2018 (has links)
The transformative process of personal growth following suffering and challenges, or posttraumatic growth (PTG), is under-researched in persons with acquired disability. The dimensions of acquired disability, as outlined by the World Health Organization, include impairments in body functions and body structures, and restrictions in activities and participation. The 5 domains of PTG are personal strength, new possibilities, relating to other people, appreciation of life, and spiritual change. The purpose of this quantitative study was to identify, using discriminant function analysis, a discriminant profile of the dimensions of acquired disability on the domains of posttraumatic growth. The first research question focused on the number of statistically significant uncorrelated linear combinations. The second research question examined the multivariate profile (or profiles, if more than one function is statistically significant) of the Posttraumatic Growth Inventory domains that discriminate the dimensions of acquired disability. A cross-sectional survey design was used to gather data from 161 individuals with acquired disability who were over 18 years of age and at least 1 year postdiagnosis. Participants were recruited through a Facebook page and targeted advertising, as well as personal invitations to online support groups advocating for persons with acquired disability. The analysis found only 1 significant pairwise connection: between impairment in body structure together with restrictions in activity and participation, and the PTG domain of personal strength. Results may be used to guide the planning and implementation of aftercare programs for individuals diagnosed with an acquired disability to help promote PTG.
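Discriminant function analysis, the method this study applies, finds linear combinations of variables that best separate groups. A minimal two-group sketch (Fisher's linear discriminant in two dimensions, with made-up data, not the study's variables):

```python
def fisher_discriminant_2d(group_a, group_b):
    """Two-group Fisher discriminant in 2D: w = Sw^-1 (mean_a - mean_b),
    where Sw is the pooled within-group scatter matrix."""
    def mean(g):
        n = len(g)
        return (sum(x for x, _ in g) / n, sum(y for _, y in g) / n)

    def scatter(g, m):
        sxx = sum((x - m[0]) ** 2 for x, _ in g)
        syy = sum((y - m[1]) ** 2 for _, y in g)
        sxy = sum((x - m[0]) * (y - m[1]) for x, y in g)
        return sxx, sxy, syy

    ma, mb = mean(group_a), mean(group_b)
    axx, axy, ayy = scatter(group_a, ma)
    bxx, bxy, byy = scatter(group_b, mb)
    sxx, sxy, syy = axx + bxx, axy + bxy, ayy + byy  # pooled scatter
    det = sxx * syy - sxy * sxy
    dx, dy = ma[0] - mb[0], ma[1] - mb[1]
    # w = Sw^-1 * (ma - mb), using the closed-form 2x2 inverse
    return ((syy * dx - sxy * dy) / det, (-sxy * dx + sxx * dy) / det)
```

Projecting cases onto the resulting direction w maximizes between-group relative to within-group variation; full discriminant analysis extends this to several groups and significance tests on each function.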
83

O papel do juiz na efetivação dos valores constitucionais no processo / The role of the judge in enforcing the constitutional values in the process.

Francisco, João Eberhardt 06 June 2014 (has links)
O presente trabalho se propõe a investigar se a mudança da conformação da legislação, disposta em enunciados normativos que contém termos imprecisos, conceitos tipológicos, e a consequente exigência de tarefa hermenêutica diversa e mais intensa do que a que era procedida sob o modelo anterior, modifica o modo de ser do processo. Para tanto, analisa-se como a aceitação da eficácia plena das normas constitucionais afeta a função jurisdicional, impondo ao julgador a tarefa de continuamente verificar a adequação da norma aplicável à resolução de uma dada controvérsia ao modelo constitucional. Considerando-se que essa tarefa confere poder aumentado ao juiz, discute-se como sua autoridade está limitada pelo devido processo legal e conclui-se ser seu dever a efetivação no processo dos valores constitucionais inseridos nesse conceito, conferindo meios e oportunidades para que as partes exerçam amplamente seu direito de participação e influência no resultado que lhes afetará. / The present work investigates whether the new shape of legislation, arranged in normative statements containing imprecise terms and typological concepts, and the consequently different and more intense hermeneutical task it demands compared with the previous model, changes the way the process operates. To this end, we analyze how accepting the full effectiveness of constitutional norms affects the judicial function, imposing on the judge the task of continually checking whether the rule applicable to the resolution of a given dispute conforms to the constitutional model. Since this task gives the judge increased power, we discuss how his authority is bounded by due process of law, and conclude that it is his duty to give effect, within the process, to the constitutional values embedded in that concept, providing the parties with the means and opportunities to fully exercise their right to participate in and influence the result that will affect them.
84

APPRENTISSAGE SÉQUENTIEL : Bandits, Statistique et Renforcement / Sequential Learning: Bandits, Statistics and Reinforcement

Maillard, Odalric-Ambrym 03 October 2011 (has links) (PDF)
This thesis deals with the following areas of machine learning: bandit theory, statistical learning, and reinforcement learning. Its common thread is the study of several notions of adaptation, from a non-asymptotic point of view: to an environment or an adversary in Part I, to the structure of a signal in Part II, and to a reward structure or a model of the states of the world in Part III. First, we derive a non-asymptotic analysis of a multi-armed bandit algorithm based on the Kullback-Leibler divergence; in the case of distributions with finite support, it attains the known distribution-dependent asymptotic lower bound on performance for this problem. Then, for a bandit facing a possibly adaptive adversary, we introduce history-dependent models that capture a possible weakness of the adversary, and we show how to exploit them to design algorithms that adapt to this weakness. We contribute to the regression problem by showing the usefulness of random projections, both theoretically and practically, when the hypothesis space under consideration is of large or even infinite dimension. We also use random sampling operators in the setting of sparse recovery when the basis is far from orthogonal. Finally, we combine Parts I and II: first to provide a non-asymptotic analysis of reinforcement learning algorithms, and then, upstream of the Markov Decision Process framework, to discuss the practical problem of choosing a good model of states.
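The Kullback-Leibler-based bandit index mentioned in the abstract can be sketched for Bernoulli arms as a generic KL-UCB-style computation (an illustrative version, not necessarily the thesis's exact algorithm): the index of an arm is the largest mean still statistically compatible with its observations.

```python
import math

def bernoulli_kl(p, q):
    """KL divergence between Bernoulli(p) and Bernoulli(q)."""
    eps = 1e-12
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def klucb_index(mean, pulls, t):
    """Largest q >= mean with pulls * KL(mean, q) <= log(t),
    found by bisection: a KL-UCB-style upper confidence index."""
    budget = math.log(max(t, 2)) / pulls
    lo, hi = mean, 1.0
    for _ in range(60):  # bisection to high precision
        mid = (lo + hi) / 2
        if bernoulli_kl(mean, mid) <= budget:
            lo = mid
        else:
            hi = mid
    return lo
```

The index shrinks toward the empirical mean as an arm is pulled more often, which is what drives the regret bound matching the distribution-dependent lower bound.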
85

Matoucí vzory ve strojovém učení / Adversarial Examples in Machine Learning

Kocián, Matěj January 2018 (has links)
Deep neural networks have recently achieved high accuracy on many important tasks, most notably image classification. However, these models are not robust to slightly perturbed inputs, known as adversarial examples. These can severely decrease accuracy and thus endanger systems in which such machine learning models are employed. We present a review of the adversarial-examples literature. We then propose new defenses against adversarial examples: a network combining RBF units with convolution, which we evaluate on MNIST and which achieves better accuracy than an adversarially trained CNN, and input-space discretization, which we evaluate on MNIST and ImageNet with promising results. Finally, we explore a way of generating adversarial perturbations without access to the input to be perturbed.
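The input-space discretization defense mentioned in the abstract can be sketched as simple quantization (an illustrative reduction, not the thesis's exact procedure): snapping each input value to a small set of levels can erase small adversarial perturbations before classification.

```python
def discretize(pixels, levels=8):
    """Quantize values in [0, 1] to a fixed number of evenly spaced
    levels, snapping away small perturbations of each value."""
    return [round(v * (levels - 1)) / (levels - 1) for v in pixels]
```

A clean input and a slightly perturbed copy often fall into the same bins, so the model downstream sees identical inputs.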
86

Object Detection using deep learning and synthetic data

Lidberg, Love January 2018 (has links)
This thesis investigates how synthetic data can be utilized when training convolutional neural networks to detect flags with threatening symbols. The synthetic data used in this thesis consisted of rendered 3D flags with different textures and of flags cut out from real images. Training on synthetic data achieved an accuracy above 80%, compared to the 88% accuracy achieved with a dataset containing only real images. The highest accuracy was achieved by combining real and synthetic data, showing that synthetic data can be used as a complement to real data. Some attempts to improve the accuracy were made using generative adversarial networks, without achieving any encouraging results.
87

Robust Large Margin Approaches for Machine Learning in Adversarial Settings

Torkamani, MohamadAli 21 November 2016 (has links)
Machine learning algorithms are invented to learn from data and to use data to perform predictions and analyses. Many agencies are now using machine learning algorithms to provide services and to perform tasks that used to be done by humans. These services and tasks include making high-stake decisions. Making the right decision strongly relies on the correctness of the input data. This fact provides a tempting incentive for criminals to try to deceive machine learning algorithms by manipulating the data that is fed to the algorithms. And yet, traditional machine learning algorithms are not designed to be safe when confronting unexpected inputs. In this dissertation, we address the problem of adversarial machine learning; i.e., our goal is to build safe machine learning algorithms that are robust in the presence of noisy or adversarially manipulated data. Many complex questions -- to which a machine learning system must respond -- have complex answers. Such outputs of the machine learning algorithm can have some internal structure, with exponentially many possible values. Adversarial machine learning is more challenging when the output we want to predict has a complex structure itself. In this dissertation, a significant focus is on adversarial machine learning for predicting structured outputs. First, we develop a new algorithm that reliably performs collective classification: it jointly assigns labels to the nodes of graph data. It is robust to malicious changes that an adversary can make in the properties of the different nodes of the graph. The learning method is highly efficient and is formulated as a convex quadratic program. Empirical evaluations confirm that this technique not only secures the prediction algorithm in the presence of an adversary, but also generalizes better to future inputs, even when there is no adversary.
While our robust collective classification method is efficient, it is not applicable to generic structured prediction problems. Next, we investigate the problem of parameter learning for robust structured prediction models. This method constructs regularization functions based on the limitations of the adversary in altering the feature space of the structured prediction algorithm. The proposed regularization techniques secure the algorithm against adversarial data changes at little additional computational cost. In this dissertation, we prove that robustness to adversarial manipulation of data is equivalent to certain regularizations for large-margin structured prediction, and vice versa. This confirms some of the previous results for simpler problems. In practice, an ordinary adversary often either does not have enough computational power to design the ultimate optimal attack, or does not have sufficient information about the learner's model to do so. Therefore, it often applies many random changes to the input in the hope of making a breakthrough. This implies that if we minimize the expected loss function under adversarial noise, we obtain robustness against mediocre adversaries. Dropout training resembles such a noise-injection scenario. Dropout training was initially proposed as a regularization technique for neural networks. The procedure is simple: at each iteration of training, randomly selected features are set to zero. We derive a regularization method for large-margin parameter learning based on dropout. Our method calculates the expected loss function under all possible dropout values. This results in a simple objective function that is efficient to optimize. We extend dropout regularization to non-linear kernels in several different directions.
We define the concept of dropout for input space, feature space, and input dimensions, and we introduce methods for approximate marginalization over feature space, even if the feature space is infinite-dimensional. Empirical evaluations show that our techniques consistently outperform the baselines on different datasets.
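The marginalization-over-dropout idea above can be illustrated for a linear model with squared loss, where the expectation over all dropout masks has a known closed form: E[(y - w.(x*m)/p)^2] = (y - w.x)^2 + ((1-p)/p) * sum((w_i x_i)^2), i.e. the original loss plus a data-dependent regularizer. This is a standard result for linear models, shown here as an illustration; the dissertation itself treats large-margin and kernel settings.

```python
from itertools import product

def marginalized_sq_loss(w, x, y, p):
    """Closed-form expected squared loss under inverted dropout with
    keep probability p: (y - w.x)^2 + (1-p)/p * sum((w_i x_i)^2)."""
    s = sum(wi * xi for wi, xi in zip(w, x))
    reg = sum((wi * xi) ** 2 for wi, xi in zip(w, x))
    return (y - s) ** 2 + (1 - p) / p * reg

def enumerated_sq_loss(w, x, y, p):
    """Exact expectation computed by enumerating all 2^d keep/drop masks."""
    total = 0.0
    for mask in product([0, 1], repeat=len(w)):
        prob = 1.0
        for m in mask:
            prob *= p if m else (1 - p)
        s = sum(wi * xi * m / p for wi, xi, m in zip(w, x, mask))
        total += prob * (y - s) ** 2
    return total
```

Because the expectation collapses to a single regularized objective, no sampling of dropout masks is needed during optimization, which is the efficiency the abstract refers to.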
88

Data-Driven and Game-Theoretic Approaches for Privacy

January 2018 (has links)
In the past few decades, there has been a remarkable shift in the boundary between public and private information. The application of information technology and electronic communications allows service providers (businesses) to collect a large amount of data. However, this "data collection" process can put the privacy of users at risk and also lead to user reluctance in accepting services or sharing data. This dissertation first investigates privacy-sensitive consumer-retailer/service-provider interactions under different scenarios, and then focuses on a unified framework for various information-theoretic privacy notions and privacy mechanisms that can be learned directly from data. Existing approaches such as differential privacy or information-theoretic privacy try to quantify privacy risk but do not capture the subjective experience and heterogeneous expression of privacy sensitivity. The first part of this dissertation introduces models to study consumer-retailer interaction problems and to better understand how retailers/service providers can balance their revenue objectives while being sensitive to user privacy concerns. This dissertation considers the following three scenarios: (i) consumer-retailer interaction via personalized advertisements; (ii) incentive mechanisms that electrical utility providers need to offer to privacy-sensitive consumers with alternative energy sources; (iii) the market viability of offering privacy-guaranteed free online services. We use game-theoretic models to capture the behaviors of both consumers and retailers, and provide insights for retailers to maximize their profits when interacting with privacy-sensitive consumers. Preserving the utility of published datasets while simultaneously providing provable privacy guarantees is a well-known challenge. In the second part, a novel context-aware privacy framework called generative adversarial privacy (GAP) is introduced.
Inspired by recent advancements in generative adversarial networks, GAP allows the data holder to learn the privatization mechanism directly from the data. Under GAP, finding the optimal privacy mechanism is formulated as a constrained minimax game between a privatizer and an adversary. For appropriately chosen adversarial loss functions, GAP provides privacy guarantees against strong information-theoretic adversaries. Both synthetic and real-world datasets are used to show that GAP can greatly reduce the adversary's capability of inferring private information at a small cost of distorting the data. / Dissertation/Thesis / Doctoral Dissertation Electrical Engineering 2018
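The minimax structure in GAP can be illustrated with the simplest possible privatizer, randomized response on a single uniform sensitive bit: flipping the bit with probability delta costs delta in distortion, while capping any adversary's Bayes-optimal inference accuracy at max(delta, 1 - delta). This is a textbook toy for the privacy-utility tradeoff, not the GAP networks themselves.

```python
def adversary_accuracy(flip_prob):
    """Bayes-optimal accuracy of an adversary guessing a uniform
    sensitive bit after it is flipped with probability flip_prob."""
    return max(flip_prob, 1.0 - flip_prob)

def tradeoff(deltas):
    """(distortion, adversary accuracy) pairs for each flip probability,
    tracing the privacy-utility curve of this toy mechanism."""
    return [(d, adversary_accuracy(d)) for d in deltas]
```

GAP replaces this hand-designed mechanism with a learned privatizer and a learned adversary, trained against each other as a constrained minimax game.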
89

As provas não repetíveis no processo penal brasileiro / The non-repeatable evidence in criminal process

Camilla Brentel 15 June 2012 (has links)
O Código de Processo Penal brasileiro foi alterado em 2008 em decorrência da promulgação de algumas Leis Ordinárias. Uma delas (nº 11.690) prescreveu a modificação do artigo 155, a fim de regulamentar a aceitação de provas não repetíveis (e outras produzidas durante as investigações) para o convencimento do julgador. No entanto, como o legislador não atribuiu significado às provas não repetíveis, tampouco teceu esclarecimentos a respeito do modo como tais provas seriam compatibilizadas com o princípio constitucional do contraditório, há muitas incertezas sobre a disposição, que tem sido objeto de discussão pela comunidade jurídica. O silêncio do legislador impediu o desenvolvimento de uma regulação eficiente sobre o assunto. Com o objetivo de contribuir para as atuais discussões, propomos uma análise comparativa da doutrina sobre provas não repetíveis utilizada na Itália, país que serviu de inspiração à criação da norma brasileira. Por meio deste estudo, pretendemos: (i) clarificar o conceito de provas não repetíveis; (ii) analisar a interação do conceito de provas não repetíveis com outras provas produzidas durante as investigações; (iii) alcançar a compreensão do tratamento normativo e doutrinário das provas não repetíveis nos processos penais brasileiro e italiano; e (iv) refletir, à luz da das regras estabelecidas na Constituição Brasileira, se a regulamentação italiana sobre as provas não repetíveis teria aplicação no processo penal brasileiro. Depois de realizadas tais aferições, refletiremos sobre a necessidade de reformulação do artigo 155 que, se confirmada, nos levará à proposição de um novo texto normativo. / The Brazilian Criminal Procedure Code was altered in 2008 as a result of the adoption of several ordinary laws. One of them (No. 11.690) amended article 155, which from then on stipulates the acceptance of non-repeatable evidence (as well as other types of evidence produced during investigations) as a means of conviction.
Nevertheless, as the legislator neither provided a definition of non-repeatable evidence nor instructed how such evidence should be treated with regard to the adversarial principle guaranteed by the Brazilian Constitution, there is much uncertainty in the legal community concerning this provision. The silence of the legislator deterred the development of an efficient regulation on the matter. Aiming to contribute to the current discussions, this work focuses on a comparative analysis of the doctrine of non-repeatable evidence as applied in Italy, the cradle of this idea. This study intends to: (i) clarify the concept of non-repeatable evidence; (ii) scrutinize the interaction of the concept of non-repeatable evidence with other evidence produced during investigations; (iii) comprehend the normative and doctrinal treatment of non-repeatable evidence in the Brazilian and Italian criminal processes; and (iv) analyze, bearing in mind the rules contained in the Brazilian Constitution, whether the Italian system of non-repeatable evidence could also be applied in Brazilian criminal procedure. After these considerations are made, the crux of this work will be whether article 155 should be rephrased and, if so, how the new article should be worded.
90

Active Cleaning of Label Noise Using Support Vector Machines

Ekambaram, Rajmadhan 19 June 2017 (has links)
Large-scale datasets collected using non-expert labelers are prone to labeling errors. Errors in the given labels, or label noise, affect classifier performance, classifier complexity, class proportions, etc. It may be that a relatively small but important class needs to have all its examples identified. Typical solutions to the label noise problem involve creating classifiers that are robust or tolerant to errors in the labels, or removing the suspected examples using machine learning algorithms. Finding the label noise examples through a manual review process is largely unexplored due to the cost and time involved. Nevertheless, we believe it is the only way to create a label-noise-free dataset. This dissertation proposes a solution that exploits the characteristics of the Support Vector Machine (SVM) classifier and the sparsity of its solution representation to identify uniform random label noise examples in a dataset. Application of this method is illustrated with problems involving two real-world large-scale datasets. This dissertation also presents results for datasets that contain adversarial label noise. A simple extension of this method to a semi-supervised learning approach is also presented. The results show that most mislabels are quickly and effectively identified by the approaches developed in this dissertation.
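The core observation, that mislabeled examples tend to end up among the SVM's support vectors, can be sketched as a review queue: given a trained linear SVM (w, b), flag every example whose margin y(w.x + b) is below 1 for manual inspection. This is illustrative only; the dissertation's actual procedure is iterative and uses a full SVM solver.

```python
def flag_for_review(examples, w, b):
    """Return indices of examples inside the margin of a linear SVM,
    i.e. with y * (w.x + b) < 1 -- support-vector candidates and
    therefore the most likely spots for label noise."""
    flagged = []
    for i, (x, y) in enumerate(examples):
        score = sum(wj * xj for wj, xj in zip(w, x)) + b
        if y * score < 1.0:
            flagged.append(i)
    return flagged
```

Because the SVM solution is sparse, the flagged set is typically a small fraction of the data, which is what makes a manual review pass affordable.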
