  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
81

The role of the judge in enforcing constitutional values in the process.

Francisco, João Eberhardt 06 June 2014 (has links)
This work investigates whether the changed shape of legislation — drafted in normative statements that contain imprecise terms and typological concepts, and consequently demanding a different and more intensive hermeneutic task than the earlier model — modifies the nature of the judicial process. To that end, it analyzes how accepting the full effectiveness of constitutional norms affects the judicial function, imposing on the judge the task of continually verifying whether the rule applicable to a given dispute conforms to the constitutional model. Since this task gives the judge increased power, the work discusses how that authority is limited by due process of law, and concludes that the judge has a duty to give effect, within the proceedings, to the constitutional values embedded in that concept, providing the parties with the means and opportunities to fully exercise their right to participate in, and influence, the outcome that will affect them.
82

SEQUENTIAL LEARNING: Bandits, Statistics and Reinforcement.

Maillard, Odalric-Ambrym 03 October 2011 (has links) (PDF)
This thesis covers the following areas of machine learning: bandit theory, statistical learning, and reinforcement learning. Its common thread is the study of several notions of adaptation from a non-asymptotic viewpoint: to an environment or an adversary in Part I, to the structure of a signal in Part II, and to a reward structure or a model of world states in Part III. We first derive a non-asymptotic analysis of a multi-armed bandit algorithm based on the Kullback-Leibler divergence which, for distributions with finite support, attains the known distribution-dependent asymptotic lower bound on performance for this problem. Then, for a bandit facing a possibly adaptive adversary, we introduce history-dependent models that capture a possible weakness of the adversary, and we show how to exploit them to design algorithms that adapt to this weakness. We contribute to the regression problem by demonstrating the usefulness of random projections, both theoretically and practically, when the hypothesis space under consideration is high- or even infinite-dimensional. We also use random sampling operators for sparse recovery when the basis is far from orthogonal. Finally, we combine Parts I and II: first to provide a non-asymptotic analysis of reinforcement learning algorithms, and then, upstream of the Markov Decision Process framework, to discuss the practical problem of choosing a good state model.
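The KL-divergence-based bandit index mentioned in this abstract can be sketched for Bernoulli arms as follows. This is a minimal illustration of the KL-UCB idea, not the thesis's algorithm; the function names, the exploration budget, and the fixed bisection depth are our assumptions.

```python
import math

def kl_bernoulli(p, q):
    """KL divergence between Bernoulli(p) and Bernoulli(q)."""
    eps = 1e-12
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def kl_ucb_index(mean, pulls, t, c=0.0):
    """Upper-confidence index for one arm: the largest q >= mean with
    pulls * KL(mean, q) <= log(t) + c * log(log(t))."""
    if pulls == 0:
        return 1.0  # unexplored arms get the most optimistic index
    budget = math.log(t) + c * math.log(max(math.log(t), 1.0))
    lo, hi = mean, 1.0
    for _ in range(50):  # bisection works because KL(mean, .) is increasing on [mean, 1]
        q = (lo + hi) / 2
        if pulls * kl_bernoulli(mean, q) > budget:
            hi = q
        else:
            lo = q
    return lo
```

At each round, a KL-UCB-style strategy pulls the arm with the largest index; the index shrinks toward the empirical mean as an arm accumulates pulls.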
83

Adversarial Examples in Machine Learning

Kocián, Matěj January 2018 (has links)
Deep neural networks have recently achieved high accuracy on many important tasks, most notably image classification. However, these models are not robust to slightly perturbed inputs known as adversarial examples, which can severely decrease accuracy and thus endanger systems that employ such machine learning models. We present a review of the adversarial-examples literature. We then propose new defenses against adversarial examples: a network combining RBF units with convolution, which we evaluate on MNIST and which achieves better accuracy than an adversarially trained CNN, and input-space discretization, which we evaluate on MNIST and ImageNet and which obtains promising results. Finally, we explore a way of generating an adversarial perturbation without access to the input to be perturbed.
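Input-space discretization, one of the defenses this abstract evaluates, can be illustrated in a few lines: snapping each pixel to a small fixed set of levels erases the low-amplitude perturbations many attacks rely on. This is a hedged sketch of the general idea; the thesis's actual quantization scheme may differ.

```python
def discretize(x, levels=8):
    """Quantize features in [0, 1] to `levels` evenly spaced values.

    Nearby inputs collapse onto the same representative, so a small
    adversarial perturbation often maps back to the clean input's bucket.
    """
    step = 1.0 / (levels - 1)
    return [round(v / step) * step for v in x]
```

The defense is applied at inference time, before the image reaches the classifier; the trade-off is a coarser input that can cost some clean accuracy.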
84

Object Detection using deep learning and synthetic data

Lidberg, Love January 2018 (has links)
This thesis investigates how synthetic data can be utilized when training convolutional neural networks to detect flags with threatening symbols. The synthetic data used in this thesis consisted of rendered 3D flags with different textures and of flags cut out from real images. Training on synthetic data alone achieved an accuracy above 80%, compared with the 88% achieved by a data set containing only real images. The highest accuracy was achieved by combining real and synthetic data, showing that synthetic data can serve as a complement to real data. Some attempts to improve the accuracy were made using generative adversarial networks, without achieving any encouraging results.
85

Robust Large Margin Approaches for Machine Learning in Adversarial Settings

Torkamani, MohamadAli 21 November 2016 (has links)
Machine learning algorithms are invented to learn from data and to use data to perform predictions and analyses. Many agencies are now using machine learning algorithms to present services and to perform tasks that used to be done by humans. These services and tasks include making high-stake decisions. Determining the right decision strongly relies on the correctness of the input data. This fact provides a tempting incentive for criminals to try to deceive machine learning algorithms by manipulating the data that is fed to the algorithms. And yet, traditional machine learning algorithms are not designed to be safe when confronting unexpected inputs. In this dissertation, we address the problem of adversarial machine learning; i.e., our goal is to build safe machine learning algorithms that are robust in the presence of noisy or adversarially manipulated data. Many complex questions -- to which a machine learning system must respond -- have complex answers. Such outputs of the machine learning algorithm can have some internal structure, with exponentially many possible values. Adversarial machine learning will be more challenging when the output that we want to predict has a complex structure itself. In this dissertation, a significant focus is on adversarial machine learning for predicting structured outputs. In this thesis, first, we develop a new algorithm that reliably performs collective classification: It jointly assigns labels to the nodes of graphed data. It is robust to malicious changes that an adversary can make in the properties of the different nodes of the graph. The learning method is highly efficient and is formulated as a convex quadratic program. Empirical evaluations confirm that this technique not only secures the prediction algorithm in the presence of an adversary, but it also generalizes to future inputs better, even if there is no adversary. 
While our robust collective classification method is efficient, it is not applicable to generic structured prediction problems. Next, we investigate the problem of parameter learning for robust, structured prediction models. This method constructs regularization functions based on the limitations of the adversary in altering the feature space of the structured prediction algorithm. The proposed regularization techniques secure the algorithm against adversarial data changes with little additional computational cost. In this dissertation, we prove that robustness to adversarial manipulation of data is equivalent to certain regularization for large-margin structured prediction, and vice versa, confirming some previous results for simpler problems. In practice, an ordinary adversary typically either lacks the computational power to design the optimal attack or lacks sufficient information about the learner's model to do so. It therefore often applies many random changes to the input in the hope of making a breakthrough. This implies that minimizing the expected loss function under adversarial noise yields robustness against such mediocre adversaries. Dropout training resembles exactly this kind of noise injection. Dropout was initially proposed as a regularization technique for neural networks, and the procedure is simple: at each iteration of training, randomly selected features are set to zero. We derive a regularization method for large-margin parameter learning based on dropout. Our method calculates the expected loss function under all possible dropout patterns, resulting in a simple objective function that is efficient to optimize. We extend dropout regularization to non-linear kernels in several different directions.
We define the concept of dropout for input space, feature space, and input dimensions, and we introduce methods for approximate marginalization over feature space, even if the feature space is infinite-dimensional. Empirical evaluations show that our techniques consistently outperform the baselines on different datasets.
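The idea of marginalizing a loss over all dropout patterns admits a closed form in the simplest setting: squared loss with a linear score. The identity below (bias² + variance of the dropped-out score) is a standard illustration of expected-loss-under-dropout, not the dissertation's large-margin objective; the function names are ours.

```python
import random

def expected_sq_loss_dropout(w, x, y, p):
    """Closed-form E[(y - w·(m ⊙ x))^2] with an i.i.d. Bernoulli(p) keep-mask m:
    (y - p·w·x)^2 + p(1-p)·Σ (w_i x_i)^2   (squared bias plus variance)."""
    mean = p * sum(wi * xi for wi, xi in zip(w, x))
    var = p * (1 - p) * sum((wi * xi) ** 2 for wi, xi in zip(w, x))
    return (y - mean) ** 2 + var

def mc_sq_loss_dropout(w, x, y, p, n=50000, seed=0):
    """Monte-Carlo estimate of the same expectation, for checking."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        s = sum(wi * xi for wi, xi in zip(w, x) if rng.random() < p)
        total += (y - s) ** 2
    return total / n
```

The closed form is what makes dropout marginalization attractive as a regularizer: the variance term penalizes large per-feature contributions without any sampling at training time.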
86

Data-Driven and Game-Theoretic Approaches for Privacy

January 2018 (has links)
abstract: In the past few decades, there has been a remarkable shift in the boundary between public and private information. The application of information technology and electronic communications allows service providers (businesses) to collect a large amount of data. However, this "data collection" process can put the privacy of users at risk and also lead to user reluctance in accepting services or sharing data. This dissertation first investigates privacy-sensitive consumer-retailer/service-provider interactions under different scenarios, and then focuses on a unified framework for various information-theoretic privacy notions and on privacy mechanisms that can be learned directly from data. Existing approaches such as differential privacy or information-theoretic privacy try to quantify privacy risk but do not capture the subjective experience and heterogeneous expression of privacy sensitivity. The first part of this dissertation introduces models to study consumer-retailer interaction problems and to better understand how retailers/service providers can balance their revenue objectives while being sensitive to user privacy concerns. This dissertation considers the following three scenarios: (i) consumer-retailer interaction via personalized advertisements; (ii) incentive mechanisms that electrical utility providers need to offer for privacy-sensitive consumers with alternative energy sources; and (iii) the market viability of offering privacy-guaranteed free online services. We use game-theoretic models to capture the behaviors of both consumers and retailers, and provide insights for retailers to maximize their profits when interacting with privacy-sensitive consumers. Preserving the utility of published datasets while simultaneously providing provable privacy guarantees is a well-known challenge. In the second part, a novel context-aware privacy framework called generative adversarial privacy (GAP) is introduced.
Inspired by recent advancements in generative adversarial networks, GAP allows the data holder to learn the privatization mechanism directly from the data. Under GAP, finding the optimal privacy mechanism is formulated as a constrained minimax game between a privatizer and an adversary. For appropriately chosen adversarial loss functions, GAP provides privacy guarantees against strong information-theoretic adversaries. Both synthetic and real-world datasets are used to show that GAP can greatly reduce the adversary's capability of inferring private information at a small cost of distorting the data. / Dissertation/Thesis / Doctoral Dissertation Electrical Engineering 2018
87

Non-repeatable evidence in the Brazilian criminal process

Camilla Brentel 15 June 2012 (has links)
The Brazilian Criminal Procedure Code was amended in 2008 as a result of the enactment of several ordinary laws. One of them (No. 11,690) amended Article 155, which from then on allows non-repeatable evidence (as well as other evidence produced during the investigation) to ground the judge's conviction. However, because the legislator neither defined non-repeatable evidence nor explained how such evidence is to be reconciled with the adversarial principle guaranteed by the Brazilian Constitution, the provision has generated considerable uncertainty and debate in the legal community. The legislator's silence has prevented the development of an efficient regulation of the matter. Aiming to contribute to the current discussions, this work offers a comparative analysis of the doctrine of non-repeatable evidence as applied in Italy, the country that inspired the Brazilian rule. The study intends to: (i) clarify the concept of non-repeatable evidence; (ii) examine how that concept interacts with other evidence produced during the investigation; (iii) understand the statutory and doctrinal treatment of non-repeatable evidence in Brazilian and Italian criminal procedure; and (iv) assess, in light of the rules of the Brazilian Constitution, whether the Italian regulation of non-repeatable evidence could be applied in the Brazilian criminal process. After these analyses, the work considers whether Article 155 should be reformulated and, if so, proposes a new wording.
88

Active Cleaning of Label Noise Using Support Vector Machines

Ekambaram, Rajmadhan 19 June 2017 (has links)
Large-scale datasets collected using non-expert labelers are prone to labeling errors. Errors in the given labels, or label noise, affect classifier performance, classifier complexity, class proportions, etc. It may be that a relatively small but important class needs to have all its examples identified. Typical solutions to the label noise problem involve creating classifiers that are robust or tolerant to label errors, or removing the suspected examples using machine learning algorithms. Finding the label-noise examples through a manual review process is largely unexplored due to the cost and time involved. Nevertheless, we believe it is the only way to create a label-noise-free dataset. This dissertation proposes a solution that exploits the characteristics of the Support Vector Machine (SVM) classifier and the sparsity of its solution representation to identify uniform random label-noise examples in a dataset. Application of this method is illustrated with problems involving two real-world large-scale datasets. This dissertation also presents results for datasets that contain adversarial label noise. A simple extension of this method to a semi-supervised learning approach is also presented. The results show that most mislabels are quickly and effectively identified by the approaches developed in this dissertation.
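The review strategy behind this abstract — an SVM's sparse solution keeps only the examples nearest its decision boundary, and mislabeled examples tend to land there — can be illustrated with a toy linear model. The function name and the signed-margin ranking criterion below are our assumptions, not the dissertation's method.

```python
def flag_for_review(examples, w, b, k=3):
    """Rank labeled examples (x, y) with y in {-1, +1} by their signed
    margin y·(w·x + b) under a trained linear classifier, and return the
    k smallest: the examples an SVM would keep as support vectors, and
    the first candidates for a manual label-noise review."""
    def margin(item):
        x, y = item
        return y * (sum(wi * xi for wi, xi in zip(w, x)) + b)
    return sorted(examples, key=margin)[:k]
```

A mislabeled point typically has a negative margin (it sits on the wrong side of the boundary), so it surfaces at the top of the review queue.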
89

Quantifying Information Leakage via Adversarial Loss Functions: Theory and Practice

January 2020 (has links)
abstract: Modern digital applications have significantly increased the leakage of private and sensitive personal data. While worst-case measures of leakage such as Differential Privacy (DP) provide the strongest guarantees, when utility matters, average-case information-theoretic measures can be more relevant. However, most such information-theoretic measures do not have clear operational meanings. This dissertation addresses this challenge. This work introduces a tunable leakage measure called maximal α-leakage, which quantifies the maximal gain of an adversary in inferring any function of a data set. The inferential capability of the adversary is modeled by a class of loss functions, namely, α-loss. The choice of α determines specific adversarial actions, ranging from refining a belief for α = 1 to guessing the best posterior for α = ∞; for these two values, maximal α-leakage simplifies to mutual information and maximal leakage, respectively. Maximal α-leakage is proved to have a composition property and to be robust to side information. There is a fundamental disconnect between theoretical measures of information leakage and their applications in practice. This issue is addressed in the second part of this dissertation by proposing a data-driven framework for learning Censored and Fair Universal Representations (CFUR) of data. This framework is formulated as a constrained minimax optimization of the expected α-loss, where the constraint ensures a measure of the usefulness of the representation. The performance of the CFUR framework with α = 1 is evaluated on publicly accessible data sets; it is shown that multiple sensitive features can be effectively censored to achieve group fairness via demographic parity while ensuring accuracy for several a priori unknown downstream tasks.
Finally, focusing on worst-case measures, novel information-theoretic tools are used to refine the existing relationship between two such measures, (ε, δ)-DP and Rényi-DP. Applying these tools to the moments accountant framework, one can track the privacy guarantee achieved by adding Gaussian noise to stochastic gradient descent (SGD) algorithms. Relative to the state of the art, for the same privacy budget, this method allows about 100 more SGD rounds for training deep learning models. / Dissertation/Thesis / Doctoral Dissertation Electrical Engineering 2020
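For reference, the α-loss family this abstract refers to has a standard closed form (a sketch of the usual definition; the notation may differ from the dissertation's):

```latex
\ell_\alpha(y, P) \;=\; \frac{\alpha}{\alpha - 1}\left(1 - P(y)^{\frac{\alpha - 1}{\alpha}}\right),
\qquad \alpha \in (1, \infty),
```

with the two limiting cases recovering familiar losses: as α → 1, ℓ_α tends to the log-loss −log P(y) (belief refinement), and as α → ∞, it tends to 1 − P(y), the probability of error of a MAP guess, matching the two endpoints where maximal α-leakage reduces to mutual information and maximal leakage.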
90

Generalized Domain Adaptation for Visual Domains

January 2020 (has links)
abstract: Humans have a great ability to recognize objects in different environments irrespective of their variations. The same does not apply to machine learning models, however, which are unable to generalize to images of objects from different domains. The generalization of these models to new data is constrained by the domain gap. Many factors, such as image background, image resolution, color, camera perspective, and variations in the objects themselves, are responsible for the domain gap between the training data (source domain) and the testing data (target domain). Domain adaptation algorithms aim to overcome this gap and learn robust models that perform well across both domains. This thesis provides solutions for the standard problem of unsupervised domain adaptation (UDA) and for the more generic problem of generalized domain adaptation (GDA). The contributions of this thesis are as follows: (1) a Certain and Consistent Domain Adaptation model for closed-set unsupervised domain adaptation that aligns the features of the source and target domains using deep neural networks; (2) a multi-adversarial deep learning model for generalized domain adaptation; and (3) a gating model that detects out-of-distribution samples for generalized domain adaptation. The models were tested across multiple computer vision datasets for domain adaptation. The thesis concludes with a discussion of the proposed approaches and future directions for research in closed-set and generalized domain adaptation. / Dissertation/Thesis / Masters Thesis Computer Science 2020
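A gating model that rejects out-of-distribution samples can be approximated, for illustration, by thresholding the classifier's maximum softmax confidence. This is a common baseline, not necessarily the gating model this thesis proposes; the threshold value is an assumption.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def is_out_of_distribution(logits, threshold=0.5):
    """Gate a sample as out-of-distribution when the classifier's
    maximum softmax confidence falls below `threshold`: confident
    predictions pass through, near-uniform ones are rejected."""
    return max(softmax(logits)) < threshold
```

In a GDA pipeline, samples flagged by the gate would be routed away from the source-trained classifier instead of being forced into a known class.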
