101 |
Řízení o přestupcích v prvním stupni / First instance hearing of administrative delicts. Nováková, Hana. January 2016 (has links)
The subject matter of this thesis is the first instance hearing of administrative delicts, with the main focus on the legal status of the accused. Since adjudicating the guilt of the accused is the fundamental objective of administrative infraction proceedings, it is not conceivable that such proceedings would be conducted in his or her absence. The accused is the essential subject whose rights and obligations are mandatorily adjudicated. Part 3 of Act No. 200/1990 Coll., on Transgressions, is the main source of legal regulation of administrative infraction proceedings, while Act No. 500/2004 Coll., the Administrative Procedure Code, applies subsidiarily. These laws represent the basic legal framework for the proper conduct of administrative infraction proceedings. However, it is also necessary to apply a wide range of legal principles arising from constitutional and international law. The European Convention on Human Rights plays a pivotal role, since it guarantees the right to a fair trial, of which the presumption of innocence and the right of self-defense are two integral parts. This thesis analyses the individual procedural rights of the accused in detail, along with their classification into the appropriate stages of the...
|
102 |
Algoritmos anytime baseados em instâncias para classificação em fluxo de dados / Instance-based anytime algorithm to data stream classification. Lemes, Cristiano Inácio. 09 March 2016 (has links)
Data stream learning is a very important research field that has received much attention from the scientific community. In many real-world applications, data is generated as potentially infinite temporal sequences. The main characteristic of stream processing is the need to provide answers under stringent time and memory restrictions. For example, a data stream classifier must provide an answer for each event before the next one arrives; if it does not, some events from the data stream may be left unclassified. Many streams generate events at a highly variable arrival rate, i.e. the time interval between two consecutive events may vary greatly.
For a learning system to be successful, two properties must be satisfied: (i) it must be able to provide a classification for a new example in a short time and (ii) it must be able to adapt the classification model to handle concept change, since the data may not follow a stationary distribution. Batch machine learning algorithms do not satisfy those properties because they assume that the distribution is stationary and they are not prepared to operate under severe memory and processing constraints. To satisfy these requirements, such algorithms must be adapted to the data stream context. One possible adaptation is to turn the algorithm into an anytime classifier. Anytime algorithms may be interrupted and still provide an approximate answer (classification) at any time. Another adaptation is to turn the algorithm into an incremental classifier, so that its model may be updated with new examples from the data stream. In this work, two approaches for data stream learning are evaluated. The first is based on a state-of-the-art anytime k-nearest neighbor classifier, for which a new tiebreak approach is proposed. Experiments show consistently better performance for this algorithm on many benchmark data sets. The second proposed approach adapts the anytime algorithm to handle concept change. This approach, called the Incremental Anytime Algorithm, was designed in two versions: one based on the Space Saving algorithm and the other on a Sliding Window. Experiments show that both versions are significantly better than baseline approaches.
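As a rough, hedged illustration of how an interruptible, instance-based classifier can operate on a stream, the Python sketch below implements a minimal anytime k-NN: it scans the stored examples one at a time, can be interrupted after any number of comparisons, and answers with the neighbours found so far. The time budget, the distance-sum tiebreak, and all names are illustrative assumptions, not the tiebreak method or algorithm proposed in the dissertation.

```python
import time
import numpy as np

class AnytimeKNN:
    """Minimal interruptible k-NN for a data stream (illustrative sketch)."""

    def __init__(self, k=3):
        self.k = k
        self.X = []   # stored training instances
        self.y = []   # their labels

    def learn(self, x, label):
        # Incremental update: simply store the labelled example.
        self.X.append(np.asarray(x, dtype=float))
        self.y.append(label)

    def classify(self, x, budget_s=0.001):
        """Return a label using at most `budget_s` seconds of scanning."""
        x = np.asarray(x, dtype=float)
        start = time.perf_counter()
        best = []  # (distance, label) pairs for the current nearest neighbours
        for xi, yi in zip(self.X, self.y):
            d = float(np.linalg.norm(x - xi))
            best.append((d, yi))
            best.sort(key=lambda t: t[0])
            best = best[: self.k]
            if time.perf_counter() - start > budget_s:
                break  # interrupted: answer with what has been seen so far
        if not best:
            return None
        # Majority vote; ties broken by the smaller summed distance (a generic choice).
        votes = {}
        for d, yi in best:
            cnt, dist = votes.get(yi, (0, 0.0))
            votes[yi] = (cnt + 1, dist + d)
        return max(votes, key=lambda lab: (votes[lab][0], -votes[lab][1]))
```

Anytime nearest-neighbour methods typically also reorder the stored examples so that an early interruption still sees informative neighbours; that step is omitted here for brevity.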
|
103 |
USING MACHINE LEARNING TECHNIQUES TO IMPROVE STATIC CODE ANALYSIS TOOLS USEFULNESS. Enas Ahmad Alikhashashneh (7013450). 16 October 2019 (has links)
This dissertation proposes an approach that uses Machine Learning (ML) techniques to reduce, as much as possible, the cost of manually inspecting the false positive warnings reported by Static Code Analysis (SCA) tools. The proposed approach neither assumes a particular SCA tool nor depends on the programming language used to write the target source code or application. To reduce the number of false positive warnings, we first evaluated a number of SCA tools in terms of software engineering metrics using a widely used synthetic code base, the Juliet test suite. From this evaluation, we concluded that SCA tools report many false positive warnings that require manual inspection. We then generated a number of datasets from source code that forced the SCA tool to generate either true positive, false positive, or false negative warnings. These datasets were used to train four ML classifiers to classify the warnings collected from the synthetic source code. From the experimental results, we observed that the classifier built using the Random Forests (RF) technique outperformed the rest of the classifiers. Lastly, using this classifier and an instance-based transfer learning technique, we ranked warnings aggregated from various open-source software projects. The experimental results show that the proposed approach to reducing the cost of manually inspecting false positive warnings outperformed a random ranking algorithm and was highly correlated with the ranked list generated by the optimal ranking algorithm.
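The sketch below illustrates the classification step described above: a Random Forests model is trained on a feature matrix with one row per SCA warning and then used to rank warnings by their predicted probability of being true positives. The feature columns, dataset size, and labels are hypothetical placeholders, not the dissertation's actual datasets or features.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical feature matrix: one row per SCA warning, with columns such as
# warning priority, function length, cyclomatic complexity, nesting depth, ...
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))          # placeholder warning features
y = rng.integers(0, 2, size=500)       # 1 = true positive, 0 = false positive

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))

# Warnings predicted as true positives with high confidence can be inspected
# first, which is roughly the ranking idea in the final step of the approach.
scores = clf.predict_proba(X_test)[:, 1]
ranked = np.argsort(-scores)           # warning indices, most suspicious first
```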
|
104 |
Novel Support Vector Machines for Diverse Learning Paradigms. Melki, Gabriella A. 01 January 2018 (has links)
This dissertation introduces novel support vector machines (SVM) for the following traditional and non-traditional learning paradigms: Online classification, Multi-Target Regression, Multiple-Instance classification, and Data Stream classification.
Three multi-target support vector regression (SVR) models are first presented. The first involves building independent, single-target SVR models for each target. The second builds an ensemble of randomly chained models using the first single-target method as a base model. The third calculates the targets' correlations and forms a maximum correlation chain, which is used to build a single chained SVR model, improving the model's prediction performance, while reducing computational complexity.
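A minimal sketch of the third model's idea, ordering the targets into a chain by correlation and feeding earlier targets into later SVR models, is shown below using scikit-learn's SVR. The greedy chain construction and the use of true target values during training (with predictions substituted at test time) are simplifying assumptions rather than the dissertation's exact procedure.

```python
import numpy as np
from sklearn.svm import SVR

def build_correlation_chain(Y):
    """Greedy chain: start from the most correlated pair of targets, then keep
    appending the unused target most correlated with the current chain tail."""
    corr = np.abs(np.corrcoef(Y, rowvar=False))
    np.fill_diagonal(corr, 0.0)
    i, j = np.unravel_index(np.argmax(corr), corr.shape)
    chain, remaining = [i, j], set(range(Y.shape[1])) - {i, j}
    while remaining:
        nxt = max(remaining, key=lambda t: corr[chain[-1], t])
        chain.append(nxt)
        remaining.remove(nxt)
    return chain

def fit_chained_svr(X, Y, chain, **svr_kwargs):
    """Train one SVR per target, augmenting features with earlier chain targets."""
    models = []
    for pos, t in enumerate(chain):
        X_aug = np.hstack([X, Y[:, chain[:pos]]]) if pos else X
        models.append(SVR(**svr_kwargs).fit(X_aug, Y[:, t]))
    return models

def predict_chained_svr(models, chain, X):
    preds = np.zeros((X.shape[0], len(chain)))
    out = np.zeros_like(preds)
    for pos, (t, m) in enumerate(zip(chain, models)):
        X_aug = np.hstack([X, preds[:, :pos]]) if pos else X
        preds[:, pos] = m.predict(X_aug)
        out[:, t] = preds[:, pos]      # restore original target ordering
    return out
```

Rolling the chain out with predicted values at test time is the usual regressor-chain convention; it keeps the feature layout identical between training and prediction.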
Under the multi-instance paradigm, a novel SVM multiple-instance formulation and an algorithm with a bag-representative selector, named Multi-Instance Representative SVM (MIRSVM), are presented. The contribution trains the SVM on bag-level information and is able to identify the instances that most strongly impact classification, i.e. the bag representatives, for both positive and negative bags, while finding the optimal class-separation hyperplane. Unlike other multi-instance SVM methods, this approach eliminates possible class imbalance issues by allowing both positive and negative bags to have at most one representative each; these representatives are the instances that contribute most to the model.
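The alternating heuristic below is one hedged way to picture bag-representative selection: each bag contributes exactly one instance, the representatives are re-chosen from the current SVM's decision values, and a standard SVM is retrained on them until the selection stabilises. It is a simplified illustration only, not MIRSVM's actual optimization problem or selection rule.

```python
import numpy as np
from sklearn.svm import SVC

def train_bag_representative_svm(bags, bag_labels, n_iter=10, C=1.0):
    """bags: list of (n_i, d) arrays; bag_labels: +1 / -1 per bag (sketch only)."""
    # Start with each bag's mean as its representative.
    reps = np.vstack([b.mean(axis=0) for b in bags])
    labels = np.asarray(bag_labels, dtype=float)
    svm = None
    for _ in range(n_iter):
        svm = SVC(kernel="rbf", C=C).fit(reps, labels)
        new_reps = []
        for b in bags:
            scores = svm.decision_function(b)
            # Positive bag: pick its highest-scoring instance (the "witness").
            # Negative bag: the highest-scoring instance is the hardest negative.
            new_reps.append(b[int(np.argmax(scores))])
        new_reps = np.vstack(new_reps)
        if np.allclose(new_reps, reps):
            break                      # representatives stable -> stop
        reps = new_reps
    return svm, reps
```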
Due to the shortcomings of current popular SVM solvers, especially in the context of large-scale learning, the third contribution presents a novel stochastic, i.e. online, learning algorithm for solving the L1-SVM problem in the primal domain, dubbed OnLine Learning Algorithm using Worst-Violators (OLLAWV). Unlike other stochastic methods, this algorithm provides a novel stopping criterion and eliminates the need for a regularization term, relying on early stopping instead. Because of these characteristics, OLLAWV was shown to efficiently produce sparse models while maintaining competitive accuracy.
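To convey the general "worst-violator" idea, the sketch below repeatedly finds the training example with the smallest margin under the current kernel expansion, adds it to the model, and stops early once no example violates the margin, which is what yields sparsity. The update rule, step-size schedule, and stopping threshold are generic illustrative choices and do not reproduce OLLAWV's actual derivation.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def worst_violator_svm(X, y, max_iter=200, eta0=1.0, gamma=0.5):
    """Generic worst-violator training loop for a kernel hinge-loss model (sketch).
    X: (n, d) array; y: (n,) array of +1 / -1 labels."""
    n = X.shape[0]
    sv_idx, alphas, bias = [], [], 0.0
    f = np.zeros(n)                      # cached outputs f(x_i) for all points
    for t in range(1, max_iter + 1):
        margins = y * (f + bias)
        worst = int(np.argmin(margins))
        if margins[worst] >= 1.0:        # no violators left -> early stop
            break
        eta = eta0 / np.sqrt(t)          # decaying step size (illustrative choice)
        sv_idx.append(worst)
        alphas.append(eta * y[worst])
        bias += eta * y[worst]
        # Update cached outputs with the newly added expansion term.
        f += alphas[-1] * rbf_kernel(X, X[worst:worst + 1], gamma).ravel()
    return np.array(sv_idx), np.array(alphas), bias
```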
OLLAWV's online nature and success in traditional classification inspired its implementation, along with its predecessor, OnLine Learning Algorithm - List 2 (OLLA-L2), in the batch data stream classification setting. These two algorithms were chosen because, unlike other existing methods, their properties are a natural remedy for the time and memory constraints that arise in the data stream problem. OLLA-L2's low space complexity deals with the memory constraints imposed by the data stream setting, while OLLAWV's fast run time, early self-stopping capability, and ability to produce sparse models address both memory and time constraints. The preliminary results for OLLAWV showed superior performance over its predecessor, so OLLAWV was chosen for the final set of experiments against current popular data stream methods.
Rigorous experimental studies and statistical analyses over various metrics and datasets were conducted in order to comprehensively compare the proposed solutions against modern, widely used methods from all paradigms. The experimental studies and analyses confirm that the proposals achieve better performance and more scalable solutions than the compared methods, making them competitive in their respective fields.
|
105 |
Apprentissage autosupervisé de modèles prédictifs de segmentation à partir de vidéos / Self-supervised learning of predictive segmentation models from video. Luc, Pauline. 25 June 2019 (has links)
Predictive models of the environment hold promise for allowing the transfer of recent reinforcement learning successes to many real-world contexts, by decreasing the number of interactions needed with the real world. Video prediction has been studied in recent years as a particular case of such predictive models, with broad applications in robotics and navigation systems. While RGB frames are easy to acquire and hold a lot of information, they are extremely challenging to predict, and cannot be directly interpreted by downstream applications. Here we introduce the novel tasks of predicting semantic and instance segmentation of future frames. The abstract feature spaces we consider are better suited for recursive prediction and allow us to develop models which convincingly predict segmentations up to half a second into the future. Predictions are more easily interpretable by downstream algorithms and remain rich, spatially detailed and easy to obtain, relying on state-of-the-art segmentation methods. We first focus on the task of semantic segmentation, for which we propose a discriminative approach based on adversarial training. Then, we introduce the novel task of predicting future semantic segmentation, and develop an autoregressive convolutional neural network to address it. Finally, we extend our method to the more challenging problem of predicting future instance segmentation, which additionally segments out individual objects. To deal with a varying number of output labels per image, we develop a predictive model in the space of high-level convolutional image features of the Mask R-CNN instance segmentation model. We are able to produce visually pleasing segmentations at a high resolution for complex scenes involving a large number of instances, and with convincing accuracy up to half a second ahead.
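The toy PyTorch module below illustrates the general recipe of predicting the future in a high-level feature space: past feature maps are stacked along the channel axis, a small convolutional network regresses the next feature map, and longer horizons are obtained autoregressively by feeding predictions back in. The single-scale features, tensor shapes, and network size are placeholder assumptions; the thesis operates on multi-scale Mask R-CNN features with a more elaborate architecture and training scheme.

```python
import torch
import torch.nn as nn

class FeaturePredictor(nn.Module):
    """Predict the next feature map from the T previous ones (illustrative sketch)."""

    def __init__(self, channels=256, history=4, hidden=256):
        super().__init__()
        self.history = history
        self.net = nn.Sequential(
            nn.Conv2d(channels * history, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, channels, kernel_size=3, padding=1),
        )

    def forward(self, feats):
        # feats: (B, T, C, H, W) with T == self.history
        b, t, c, h, w = feats.shape
        return self.net(feats.reshape(b, t * c, h, w))

    @torch.no_grad()
    def rollout(self, feats, steps):
        """Autoregressively predict `steps` future feature maps."""
        preds, history = [], feats
        for _ in range(steps):
            nxt = self.forward(history)
            preds.append(nxt)
            history = torch.cat([history[:, 1:], nxt.unsqueeze(1)], dim=1)
        return torch.stack(preds, dim=1)   # (B, steps, C, H, W)

# Usage sketch: predicted features would then be fed to a frozen segmentation
# head to obtain future segmentations; shapes here are made up for illustration.
model = FeaturePredictor()
past = torch.randn(2, 4, 256, 32, 32)
future = model.rollout(past, steps=3)
```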
|
106 |
Le principe de proportionnalité devant la cour de justice et le tribunal de première instance des communautés européennes / The principle of proportionality before the Court of Justice and the Court of First Instance of the European Communities. Vieu-Planchon, Marie-Hélène. 13 December 2000 (has links) (PDF)
Developed by the Court of Justice of the European Communities, the principle of proportionality requires that acts of the Community institutions be suitable for achieving the objective pursued and not exceed the limits of what is necessary to that end. From this judge-made construction emerges the autonomy of the principle, whose flexibility and complementarity make it an ambivalent rule capable of reviewing Community acts, but also national acts, in different ways. Analysis of the case law reveals the plural forms of the principle of proportionality: the terms of the relationship it establishes are variable and its content is adjustable, even though the application of the principle most often, but not exclusively, involves a test of necessity and/or a test of appropriateness. The real coherence in the application of the principle demonstrates the court's intention to make it the instrument of a judicial policy serving the process of integration set in motion by the creation of the European Communities. [author's abstract]
|
107 |
A probabilistic framework and algorithms for modeling and analyzing multi-instance data. Behmardi, Behrouz. 28 November 2012 (has links)
Multi-instance data, in which each object (e.g., a document) is a collection of instances (e.g., words), are widespread in machine learning, signal processing, computer vision, bioinformatics, music, and social sciences. Existing probabilistic models, e.g., latent Dirichlet allocation (LDA), probabilistic latent semantic indexing (pLSI), and discrete component analysis (DCA), have been developed for modeling and analyzing multi-instance data. Such models introduce a generative process for multi-instance data which includes a low-dimensional latent structure. While such models offer great freedom in capturing the natural structure in the data, their inference may present challenges. For example, the sensitivity to the choice of hyper-parameters in such models requires careful inference (e.g., through cross-validation), which results in large computational complexity. The inference for fully Bayesian models which contain no hyper-parameters often involves slowly converging sampling methods. In this work, we develop approaches for addressing such challenges and further enhancing the utility of such models.
This dissertation demonstrates a unified convex framework for probabilistic modeling of multi-instance data. The three main aspects of the proposed framework are as follows. First, joint regularization is incorporated into multiple density estimation to simultaneously learn the structure of the distribution space and infer each distribution. Second, a novel confidence constraints framework is used to facilitate a tuning-free approach to control the amount of regularization required for the joint multiple density estimation with theoretical guarantees on correct structure recovery. Third, we formulate the problem using a convex framework and propose efficient optimization algorithms to solve it.
This work addresses the unique challenges associated with both discrete and continuous domains. In the discrete domain, we propose a confidence-constrained rank minimization (CRM) to recover the exact number of topics in topic models with theoretical guarantees on recovery probability and mean squared error of the estimation. We provide a computationally efficient optimization algorithm for the problem to further the applicability of the proposed framework to large real-world datasets. In the continuous domain, we propose to use the maximum entropy (MaxEnt) framework for multi-instance datasets. In this approach, bags of instances are represented as distributions using the principle of MaxEnt. We learn basis functions which span the space of distributions for jointly regularized density estimation. The basis functions are analogous to topics in a topic model.
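As a concrete but heavily simplified entry point to the MaxEnt view of bags, the snippet below uses the standard fact that the maximum-entropy density matching a given mean and covariance is a Gaussian: each bag is summarised by the Gaussian fitted to its instances, and bags are compared through a divergence between those distributions. This special case ignores the learned basis functions and the joint regularization that are central to the dissertation.

```python
import numpy as np

def bag_to_maxent_gaussian(bag, ridge=1e-3):
    """MaxEnt density under first/second-moment constraints = Gaussian fit.
    bag: (n_instances, d) array of instance feature vectors."""
    mu = bag.mean(axis=0)
    cov = np.cov(bag, rowvar=False) + ridge * np.eye(bag.shape[1])
    return mu, cov

def gaussian_kl(p, q):
    """KL( N(mu0, S0) || N(mu1, S1) ) between two bag representations."""
    (mu0, s0), (mu1, s1) = p, q
    d = mu0.shape[0]
    s1_inv = np.linalg.inv(s1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(s1_inv @ s0) + diff @ s1_inv @ diff - d
                  + np.log(np.linalg.det(s1) / np.linalg.det(s0)))

# Usage sketch with two made-up bags of instances.
rng = np.random.default_rng(1)
bag_a = rng.normal(0.0, 1.0, size=(40, 5))
bag_b = rng.normal(0.5, 1.2, size=(30, 5))
p, q = bag_to_maxent_gaussian(bag_a), bag_to_maxent_gaussian(bag_b)
symmetric_kl = gaussian_kl(p, q) + gaussian_kl(q, p)
```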
We validate the efficiency of the proposed framework in the discrete and continuous domains by an extensive set of experiments on synthetic datasets, as well as on real-world image and text datasets, and compare the results with state-of-the-art algorithms. / Graduation date: 2013
|
108 |
Editing model based on the object-oriented approach. Watanabe, Toyohide; Yoshida, Yuuji; Fukumura, Teruo. 10 1900 (has links)
No description available.
|
109 |
Personalised ontology learning and mining for web information gathering. Tao, Xiaohui. January 2009 (has links)
Over the last decade, the rapid growth and adoption of the World Wide Web has further exacerbated user needs for efficient mechanisms for information and knowledge location, selection, and retrieval. How to gather useful and meaningful information from the Web becomes challenging to users. The capture of user information needs is key to delivering users' desired information, and user profiles can help to capture information needs. However, effectively acquiring user profiles is difficult. It is argued that if user background knowledge can be specified by ontologies, more accurate user profiles can be acquired and thus information needs can be captured effectively. Web users implicitly possess concept models that are obtained from their experience and education, and use the concept models in information gathering. Prior to this work, much research has attempted to use ontologies to specify user background knowledge and user concept models. However, these works have a drawback in that they cannot move beyond the subsumption of super- and sub-class structure to emphasising the specific semantic relations in a single computational model. This has also been a challenge for years in the knowledge engineering community. Thus, using ontologies to represent user concept models and to acquire user profiles remains an unsolved problem in personalised Web information gathering and knowledge engineering. In this thesis, an ontology learning and mining model is proposed to acquire user profiles for personalised Web information gathering. The proposed computational model emphasises the specific is-a and part-of semantic relations in one computational model. The world knowledge and users' Local Instance Repositories are used to attempt to discover and specify user background knowledge. From a world knowledge base, personalised ontologies are constructed by adopting automatic or semi-automatic techniques to extract user interest concepts, focusing on user information needs. A multidimensional ontology mining method, Specificity and Exhaustivity, is also introduced in this thesis for analysing the user background knowledge discovered and specified in user personalised ontologies. The ontology learning and mining model is evaluated by comparing with human-based and state-of-the-art computational models in experiments, using a large, standard data set. The experimental results are promising for evaluation. The proposed ontology learning and mining model in this thesis helps to develop a better understanding of user profile acquisition, thus providing better design of personalised Web information gathering systems. The contributions are increasingly significant, given both the rapid explosion of Web information in recent years and today's accessibility to the Internet and the full text world.
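Purely as an illustration of representing a personalised ontology with typed is-a and part-of relations and mining simple statistics from it, the sketch below stores concepts in a labelled digraph and computes toy specificity and exhaustivity scores against a user's Local Instance Repository. Both formulas are invented placeholders and are not the definitions introduced in the thesis.

```python
from collections import defaultdict

class PersonalisedOntology:
    """Concept graph with typed edges ('is-a', 'part-of'). Illustrative sketch only."""

    def __init__(self):
        self.children = defaultdict(list)   # parent -> [(child, relation)]

    def add_relation(self, child, parent, relation):
        assert relation in ("is-a", "part-of")
        self.children[parent].append((child, relation))

    def descendants(self, concept):
        out, stack = set(), [concept]
        while stack:
            for c, _ in self.children[stack.pop()]:
                if c not in out:
                    out.add(c)
                    stack.append(c)
        return out

    def specificity(self, concept):
        # Toy definition: concepts with fewer descendants are more specific.
        return 1.0 / (1.0 + len(self.descendants(concept)))

    def exhaustivity(self, concept, local_instance_repository):
        # Toy definition: fraction of the concept's subtree that the user's
        # Local Instance Repository actually mentions.
        subtree = self.descendants(concept) | {concept}
        covered = {c for c in subtree if c in local_instance_repository}
        return len(covered) / len(subtree)

onto = PersonalisedOntology()
onto.add_relation("neural network", "machine learning", "is-a")
onto.add_relation("loss function", "neural network", "part-of")
lir = {"neural network"}                    # hypothetical user-document concepts
print(onto.specificity("machine learning"), onto.exhaustivity("machine learning", lir))
```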
|
110 |
Computational Methods for Perceptual Training in Radiology. January 2012 (has links)
abstract: Medical images constitute a special class of images that are captured to allow diagnosis of disease, and their "correct" interpretation is vitally important. Because they are not "natural" images, radiologists must be trained to visually interpret them. This training process includes implicit perceptual learning that is gradually acquired over an extended period of exposure to medical images. This dissertation proposes novel computational methods for evaluating and facilitating perceptual training in radiologists. Part 1 of this dissertation proposes an eye-tracking-based metric for measuring the training progress of individual radiologists. Six metrics were identified as potentially useful: time to complete task, fixation count, fixation duration, consciously viewed regions, subconsciously viewed regions, and saccadic length. Part 2 of this dissertation proposes an eye-tracking-based entropy metric for tracking the rise and fall in the interest level of radiologists, as they scan chest radiographs. The results showed that entropy was significantly lower when radiologists were fixating on abnormal regions. Part 3 of this dissertation develops a method that allows extraction of Gabor-based feature vectors from corresponding anatomical regions of "normal" chest radiographs, despite anatomical variations across populations. These feature vectors are then used to develop and compare transductive and inductive computational methods for generating overlay maps that show atypical regions within test radiographs. The results show that the transductive methods produced much better maps than the inductive methods for 20 ground-truthed test radiographs. Part 4 of this dissertation uses an Extended Fuzzy C-Means (EFCM) based instance selection method to reduce the computational cost of transductive methods. The results showed that EFCM substantially reduced the computational cost without a substantial drop in performance. The dissertation then proposes a novel Variance Based Instance Selection (VBIS) method that also reduces the computational cost, but allows for incremental incorporation of new informative radiographs, as they are encountered. Part 5 of this dissertation develops and demonstrates a novel semi-transductive framework that combines the superior performance of transductive methods with the reduced computational cost of inductive methods. The results showed that the semi-transductive approach provided both an effective and efficient framework for detection of atypical regions in chest radiographs. / Dissertation/Thesis / Ph.D. Computer Science 2012
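The snippet below shows one hedged way such an entropy measure could be computed: fixations falling inside a time window are binned over a grid laid on the radiograph, and the Shannon entropy of that spatial distribution is reported, so that tightly clustered gaze (for example, dwelling on a suspicious region) yields low entropy. The window length, grid size, and binning scheme are assumptions for illustration, not the dissertation's exact metric.

```python
import numpy as np
from scipy.stats import entropy

def gaze_entropy(fix_x, fix_y, fix_t, t_start, t_end, img_w, img_h, grid=8):
    """Shannon entropy of the spatial fixation distribution in [t_start, t_end)."""
    fix_x, fix_y, fix_t = map(np.asarray, (fix_x, fix_y, fix_t))
    in_window = (fix_t >= t_start) & (fix_t < t_end)
    if not np.any(in_window):
        return 0.0
    hist, _, _ = np.histogram2d(
        fix_x[in_window], fix_y[in_window],
        bins=grid, range=[[0, img_w], [0, img_h]])
    p = hist.ravel()
    p = p[p > 0] / p.sum()
    return float(entropy(p, base=2))        # in bits; low = concentrated gaze

# Usage sketch with made-up fixation data (pixel coordinates and seconds).
xs = [100, 105, 110, 400, 120]
ys = [200, 210, 205, 300, 208]
ts = [0.1, 0.4, 0.8, 1.2, 1.5]
print(gaze_entropy(xs, ys, ts, t_start=0.0, t_end=1.0, img_w=512, img_h=512))
```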
|