121 |
Učení založené na instancích / Instance based learning Martikán, Miroslav January 2009 (has links)
This thesis focuses on instance-based learning algorithms. Its main goal is to create an application for educational purposes. The thesis describes instance-based learning (IBL) algorithms, nearest-neighbour algorithms and k-d trees theoretically. The practical part covers the development of a tutorial application that can generate data, classify them with the nearest-neighbour algorithm, and test the IB1, IB2 and IB3 algorithms.
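The IB1 algorithm is essentially 1-nearest-neighbour classification: a new instance receives the label of the closest stored training instance. As a minimal sketch of that core idea (not the thesis's actual tutorial application, whose data generation and IB2/IB3 testing are not shown):

```python
import math

def nearest_neighbor_classify(train, query):
    """Classify `query` with the label of its nearest stored instance (1-NN,
    the core of IB1). `train` is a list of (point, label) pairs, where each
    point is an equal-length tuple of numeric features."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    _, label = min(train, key=lambda pair: dist(pair[0], query))
    return label

data = [((0.0, 0.0), "A"), ((1.0, 1.0), "A"), ((5.0, 5.0), "B")]
print(nearest_neighbor_classify(data, (4.0, 4.5)))  # prints "B" (nearest to (5.0, 5.0))
```

IB2 and IB3 extend this scheme by storing only misclassified instances and by discarding noisy ones, respectively.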
|
122 |
Investiční životní pojištění a způsoby jeho optimálního nastavení pro klienty společnosti Partners For Life Planning, a.s. / Investment Life Insurance and the Methods of Its Optimal Setting for Clients of the Company Partners For Life Planning, Corp. Chromá, Zuzana January 2012 (has links)
This thesis deals with investment life insurance and its practical application to a selected group of 50 clients of Partners For Life Planning, Corp. Its scope is the optimization of the sum insured for death, permanent accidental disability, serious illness and disability insurance for people in various life situations. It surveys the insurance companies' offers of investment life insurance, compares them against the chosen criteria, discusses the results, and then confronts these results with the group of 50 insured people.
|
123 |
Základní otázky znalecké činnosti z hlediska rizik / Basic Issues of the Expertising from the Risk Management Bílková, Zdeňka January 2016 (has links)
This master's thesis deals with the identification of risks that affect expert activities in the Czech Republic. The necessary information is obtained through a questionnaire addressed to forensic experts in the field of economics. The first part of the thesis describes the theoretical background, explaining the basic concepts connected with the identified risks. The second part deals with the professional insurance of experts. Each expert's knowledge of the topic "Basic issues of the expertising from the risk management" is processed by means of a SWOT analysis. Finally, the current state of expert activities in the Czech Republic is evaluated and possible measures are suggested.
|
124 |
Géographie de la justice pénale en France : L'équité à l'épreuve des territoires / Geography of Penal Justice in France : Equity tested in the territories Cahu, Etienne 04 May 2017 (has links)
Cette thèse interroge l'apparente contradiction scalaire entre la proclamation de lois spatialement uniformes à l'échelle nationale et la territorialisation des populations françaises. Elle essaye de comprendre comment le système judiciaire hexagonal réussit à concilier ses exigences constitutionnelles d'indivisibilité et d'égalité avec la pluralité des territoires. L'analyse, enchevêtrant une démarche qualitative et une démarche quantitative par l'exploitation des données des juridictions et du casier judiciaire national, permet de conclure que les institutions judiciaires sont productrices d'injustices. Plus ou moins asphyxiés par les flux de délits à réprimer, les tribunaux de grande instance ne condamnent pas uniformément à l'échelle nationale, d'autant plus qu'ils doivent suivre les priorités définies dans la politique pénale du procureur de la République. Possédant une propension diverse à devenir de véritables acteurs politiques de leur territoire, les chefs du parquet jouent ainsi un rôle essentiel de passeur scalaire mais accentuent l'iniquité du système pénal. En effet, l'égalité proclamée comme un des fondements de la République française est abandonnée au profit d'une stigmatisation des territoires les plus défavorisés qui sont plus sévèrement condamnés que ne laisserait attendre la géographie des délits alors même qu'ils sont oubliés dans les politiques de prévention de la délinquance. Rompant dès lors complètement avec le principe de l'équité, les institutions judiciaires accentuent les fractures socio-spatiales du territoire français. Ces processus de fragmentation révèlent d'une part la pertinence de l'analyse de la justice pénale par la géographie et d'autre part l'impossibilité de ne penser une amélioration du système judiciaire qu'en vase clos.
/ This thesis questions the seeming scalar contradiction between the spatially uniform proclamation of laws on a national scale and the territorialization of French populations. It tries to understand how the French judiciary system succeeds in reconciling its constitutional demands of indivisibility and equality with the plurality of territories. The analysis, mixing a qualitative and a quantitative approach using the data of jurisdictions and the national criminal record, allows concluding that judiciary institutions produce injustices. More or less suffocated by the number of offences to be punished, the district courts do not condemn uniformly on the national level, all the more since they must follow the priorities defined in the penal policy by the public prosecutor. Having a varying propensity to become genuine political actors in their territory, the public prosecutors thus play an essential part as scalar linkmen, but accentuate the iniquity of the penal system. Indeed, the equality claimed as a foundation of the French Republic is abandoned in favour of a stigmatization of the most disadvantaged territories, which are more severely condemned than the geography of offences would lead one to expect, even as they are forgotten in delinquency prevention policies. Thus breaking completely with the principle of equity, the judiciary institutions accentuate the socio-spatial dislocations of the French territory. These processes of fragmentation reveal, on the one hand, the relevance of analysing penal justice through geography and, on the other hand, the impossibility of thinking any improvement of the judiciary system in isolation.
|
125 |
Energy-efficient Benchmarking for Energy-efficient Software Pukhkaiev, Dmytro 14 January 2016
With the continuous growth of computing systems, the energy efficiency of their processes becomes ever more important. Different configurations, implying different energy efficiency of the system, can be used to perform a process. A configuration denotes the choice among different hardware and software settings (e.g., CPU frequency, number of threads, the concrete algorithm, etc.). Identifying the most energy-efficient configuration requires benchmarking all configurations; however, this benchmarking is itself time- and energy-consuming. This thesis explores (a) the effect of dynamic voltage and frequency scaling (DVFS) in combination with dynamic concurrency throttling (DCT) on the energy consumption of (de)compression, DBMS query execution, encryption/decryption and sorting; and (b) a generic approach to reduce the benchmarking effort needed to determine the optimal configuration. Our findings show that using optimal configurations can save on weighted average 15.14% of energy compared to the default configuration. Moreover, we propose a generic heuristic (fractional factorial design) that combines data mining (adaptive instance selection) with machine learning techniques (multiple linear regression) to decrease the benchmarking effort by building a regression model on the smallest feasible subset of benchmarked configurations. Our approach reduces the energy consumed for benchmarking by 63.9% while impairing the energy efficiency of the computational process by only 1.88 percentage points, due to using a near-optimal rather than the optimal configuration.
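The core idea of the proposed heuristic — benchmark only a small subset of configurations, fit a multiple linear regression, and predict the energy of the remaining configurations — can be sketched as follows. The configuration space, toy energy model and subset below are hypothetical illustrations, not the thesis's actual design or data:

```python
def fit_linear(X, y):
    """Least-squares fit of y ≈ w0 + w1*x1 + ... via the normal equations."""
    rows = [[1.0] + list(x) for x in X]
    n = len(rows[0])
    A = [[sum(r[i] * r[j] for r in rows) for j in range(n)] for i in range(n)]
    b = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(n)]
    for i in range(n):  # Gaussian elimination with partial pivoting
        p = max(range(i, n), key=lambda k: abs(A[k][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for k in range(i + 1, n):
            f = A[k][i] / A[i][i]
            for j in range(i, n):
                A[k][j] -= f * A[i][j]
            b[k] -= f * b[i]
    w = [0.0] * n
    for i in reversed(range(n)):
        w[i] = (b[i] - sum(A[i][j] * w[j] for j in range(i + 1, n))) / A[i][i]
    return w

def predict(w, x):
    return w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))

# hypothetical configuration space: (CPU frequency in MHz, thread count)
configs = [(f, t) for f in (1200, 2000, 2800) for t in (1, 2, 4, 8)]
energy = lambda f, t: 50.0 + 0.02 * f - 3.0 * t  # toy energy model (Joules)

# benchmark only a small subset, fit the model, predict the rest
subset = [(1200, 1), (2000, 2), (2800, 4), (1200, 8), (2800, 1)]
w = fit_linear(subset, [energy(f, t) for f, t in subset])
best = min(configs, key=lambda c: predict(w, c))
print(best)  # lowest predicted energy: low frequency, many threads
```

In this toy setup only 5 of 12 configurations are measured, mirroring the thesis's trade-off between benchmarking cost and choosing a near-optimal configuration.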
|
126 |
Using Mask R-CNN for Instance Segmentation of Eyeglass Lenses / Användning av Mask R-CNN för instanssegmentering av glasögonlinser Norrman, Marcus, Shihab, Saad January 2021 (has links)
This thesis investigates the performance of Mask R-CNN when utilizing transfer learning on a small dataset. The aim was to instance segment eyeglass lenses as accurately as possible from self-portrait images. Five different models were trained, where the key difference was the types of eyeglasses the models were trained on. The eyeglasses were grouped into three types: fully rimmed, semi-rimless, and rimless glasses. 1550 images were used for training, validation, and testing. The models' performance was evaluated using TensorBoard training data and mean Intersection over Union (mIoU) scores. No major differences in performance were found among the four models that grouped all three types of glasses into one class; their mIoU scores range from 0.913 to 0.94, whereas the model with one class for each group of glasses performed worse, with an mIoU of 0.85. The thesis shows that one can achieve strong instance segmentation results using a limited dataset when taking advantage of transfer learning. / Denna uppsats undersöker prestandan för Mask R-CNN vid användning av överföringsinlärning på en liten datamängd. Syftet med arbetet var att segmentera glasögonlinser så exakt som möjligt från självporträttbilder. Fem olika modeller tränades, där den viktigaste skillnaden var de typer av glasögon som modellerna tränades på. Glasögonen delades in i tre typer: helbåge, halvbåge och båglösa. Totalt samlades 1550 träningsbilder in, dessa annoterades och användes för att träna modellerna. Modellernas prestanda utvärderades med TensorBoard-träningsdata samt genomsnittlig Intersection over Union (IoU). Inga större skillnader i prestanda hittades mellan modellerna som endast tränades på en klass av glasögon. Deras genomsnittliga IoU varierar mellan 0,913 och 0,94. Modellen där varje glasögonkategori representerades som en unik klass presterade sämre, med en genomsnittlig IoU på 0,85.
Resultatet av uppsatsen påvisar att goda instanssegmenteringsresultat går att uppnå med hjälp av en begränsad datamängd om överföringsinlärning används.
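The mean Intersection over Union metric used for evaluation above can be sketched for flat binary masks. This is a minimal illustration of the metric, not the thesis's evaluation code:

```python
def iou(pred, gt):
    """Intersection over Union of two flat binary masks of equal length."""
    inter = sum(p and g for p, g in zip(pred, gt))  # pixels in both masks
    union = sum(p or g for p, g in zip(pred, gt))   # pixels in either mask
    return inter / union if union else 1.0

def mean_iou(preds, gts):
    """Average IoU over a set of (prediction, ground-truth) mask pairs."""
    return sum(iou(p, g) for p, g in zip(preds, gts)) / len(gts)

pred = [1, 1, 0, 0]
gt   = [1, 0, 0, 0]
print(iou(pred, gt))  # 1 overlapping pixel / 2 pixels in the union = 0.5
```

In practice the masks are two-dimensional arrays over whole images; flattening them row by row gives exactly this computation.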
|
127 |
Decentralized Coordination of Dynamic Software Updates in the Internet of Things Weißbach, Martin, Taing, Nguonly, Wutzler, Markus, Springer, Thomas, Schill, Alexander, Clarke, Siobhán 01 July 2021
Large-scale IoT service deployments run on a large number of distributed, interconnected computing nodes comprising sensors, actuators, gateways and cloud infrastructure. Since IoT is a fast-growing, dynamic domain, the implementations of software components are subject to frequent changes addressing bug fixes, quality assurance or changed requirements. To ensure continuous monitoring and control of processes, software updates have to be conducted while the nodes are operating, without losing any sensed data or actuator instructions. Current IoT solutions usually support the centralized management and automated deployment of updates but are restricted to broadcasting the updates and running local update processes at all nodes. In this paper we propose an update mechanism for IoT deployments that considers dependencies between services across the multiple nodes involved in a common service and supports a coordinated update of component instances on distributed nodes. We rely on LyRT on all IoT nodes as the runtime supporting local disruption-minimal software updates. Our proposed middleware layer coordinates updates on a set of distributed nodes. We evaluated our approach using a demand response scenario from the smart grid domain.
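The abstract does not detail the coordination protocol itself. As a hypothetical sketch of one building block, cross-node update dependencies can be serialized with a topological sort so that every component instance is updated before any component that depends on it. The component names below are invented for illustration and are not from the paper:

```python
from collections import deque

def update_order(deps):
    """deps maps each component instance to the set of instances it depends on.
    Returns an update order in which every dependency is updated before any of
    its dependents (Kahn's topological sort; raises on cyclic dependencies)."""
    indeg = {c: len(ds) for c, ds in deps.items()}
    dependents = {c: [] for c in deps}
    for c, ds in deps.items():
        for d in ds:
            dependents[d].append(c)
    queue = deque(c for c, k in indeg.items() if k == 0)
    order = []
    while queue:
        c = queue.popleft()
        order.append(c)
        for n in dependents[c]:
            indeg[n] -= 1
            if indeg[n] == 0:
                queue.append(n)
    if len(order) != len(deps):
        raise ValueError("cyclic dependency between components")
    return order

# hypothetical smart-grid services spread over sensor, gateway and cloud nodes
deps = {
    "meter-driver": set(),
    "demand-aggregator": {"meter-driver"},
    "cloud-sync": {"demand-aggregator"},
    "dashboard": {"cloud-sync", "demand-aggregator"},
}
order = update_order(deps)
print(order)
```

A real coordinator must additionally quiesce each component and buffer in-flight sensor data during the swap, which is what the runtime-level support (LyRT in the paper) provides locally.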
|
128 |
Deep learning compact and invariant image representations for instance retrieval / Représentations compactes et invariantes à l'aide de l'apprentissage profond pour la recherche d'images par similarité Morère, Olivier André Luc 08 July 2016 (has links)
Nous avons précédemment mené une étude comparative entre les descripteurs FV et CNN dans le cadre de la recherche par similarité d’instance. Cette étude montre notamment que les descripteurs issus de CNN manquent d’invariance aux transformations comme les rotations ou changements d’échelle. Nous montrons dans un premier temps comment des réductions de dimension (“pooling”) appliquées sur la base de données d’images permettent de réduire fortement l’impact de ces problèmes. Certaines variantes préservent la dimensionnalité des descripteurs associés à une image, alors que d’autres l’augmentent, au prix du temps d’exécution des requêtes. Dans un second temps, nous proposons la réduction de dimension emboitée pour l’invariance (NIP), une méthode originale pour la production, à partir de descripteurs issus de CNN, de descripteurs globaux invariants à de multiples transformations. La méthode NIP est inspirée de la théorie pour l’invariance “i-theory”, une théorie mathématique proposée il y a peu pour le calcul de transformations invariantes à des groupes au sein de réseaux de neurones acycliques. Nous montrons que NIP permet d’obtenir des descripteurs globaux compacts (mais non binaires) et robustes aux rotations et aux changements d’échelle, et que NIP est plus performante que les autres méthodes à dimensionnalité équivalente sur la plupart des bases de données d’images. Enfin, nous montrons que la combinaison de NIP avec la méthode de hachage RBMH proposée précédemment permet de produire des codes binaires à la fois compacts et invariants à plusieurs types de transformations. La méthode NIP+RBMH, évaluée sur des bases de données d’images de moyennes et grandes échelles, se révèle plus performante que l’état de l’art, en particulier dans le cas de descripteurs binaires de très petite taille (de 32 à 256 bits). / Image instance retrieval is the problem of finding an object instance present in a query image from a database of images.
Also referred to as particular object retrieval, this problem typically entails determining with high precision whether the retrieved image contains the same object as the query image. Scale, rotation and orientation changes between query and database objects and background clutter pose significant challenges for this problem. State-of-the-art image instance retrieval pipelines consist of two major steps: first, a subset of images similar to the query are retrieved from the database, and second, Geometric Consistency Checks (GCC) are applied to select the relevant images from the subset with high precision. The first step is based on comparison of global image descriptors: high-dimensional vectors with up to tens of thousands of dimensions representing the image data. The second step is computationally highly complex and can only be applied to hundreds or thousands of images in practical applications. More discriminative global descriptors result in relevant images being more highly ranked, resulting in fewer images that need to be compared pairwise with GCC. As a result, better global descriptors are key to improving retrieval performance and have been the object of much recent interest. Furthermore, fast searches in large databases of millions or even billions of images require the global descriptors to be compressed into compact representations. This thesis will focus on how to achieve extremely compact global descriptor representations for large-scale image instance retrieval. After introducing background concepts about supervised neural networks, Restricted Boltzmann Machine (RBM) and deep learning in Chapter 2, Chapter 3 will present the design principles and recent work for the Convolutional Neural Networks (CNN), which recently became the method of choice for large-scale image classification tasks. Next, an original multistage approach for the fusion of the output of multiple CNN is proposed. Submitted as part of the ILSVRC 2014 challenge, results show that this approach can significantly improve classification results. The promising performance of CNN is largely due to their capability to learn appropriate high-level visual representations from the data. Inspired by a stream of recent works showing that the representations learnt on one particular classification task can transfer well to other classification tasks, subsequent chapters will focus on the transferability of representations learnt by CNN to image instance retrieval…
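The orbit-averaging principle behind i-theory-style invariance, which NIP builds on, can be illustrated on a toy image: pooling a descriptor over an entire transformation group yields the same result for any transformed input, because the group orbit is the same set. This is only a minimal illustration of the principle; NIP itself applies nested pooling to CNN descriptors:

```python
def rot90(img):
    """Rotate a square image (tuple of row tuples) by 90 degrees."""
    return tuple(zip(*img[::-1]))

def raw_descriptor(img):
    """Toy non-invariant descriptor: the flattened pixel values."""
    return [float(p) for row in img for p in row]

def invariant_descriptor(img):
    """Average the raw descriptor over the orbit of the rotation group
    {0, 90, 180, 270}; the result is identical for any rotation of `img`."""
    descs, cur = [], img
    for _ in range(4):
        descs.append(raw_descriptor(cur))
        cur = rot90(cur)
    return [sum(d[i] for d in descs) / 4 for i in range(len(descs[0]))]

img = ((1, 2), (3, 4))
print(invariant_descriptor(img) == invariant_descriptor(rot90(img)))  # True
```

Averaging is one choice of pooling; max or histogram pooling over the same orbit gives other invariant (and often more discriminative) statistics.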
|
129 |
Learning to Measure Invisible Fish Gustafsson, Stina January 2022 (has links)
In recent years, the EU has observed a decrease in the stocks of certain fish species due to unrestricted fishing. To combat the problem, many fisheries are investigating how to automatically estimate the catch size and composition using sensors onboard the vessels. Yet, measuring the size of fish in marine imagery is a difficult task. The images generally suffer from complex conditions caused by cluttered fish, motion blur and dirty sensors. In this thesis, we propose a novel method for automatic measurement of fish size that can enable measuring both visible and occluded fish. We use a Mask R-CNN to segment the visible regions of the fish, and then fill in the shape of each occluded fish using a U-Net. We train the U-Net to perform shape completion in a semi-supervised manner, by simulating occlusions on an open-source fish dataset. Unlike previous shape completion work, we teach the U-Net when to fill in a shape and when not to, by including a small portion of fully visible fish in the input training data. Our results show that the proposed method succeeds in filling in the shape of the synthetically occluded fish, as well as of some of the cluttered fish in real marine imagery. We achieve an mIoU score of 93.9 % on 1 000 synthetic test images and present qualitative results on real images captured onboard a fishing vessel. The qualitative results show that the U-Net can fill in the shapes of lightly occluded fish, but struggles when the tail fin is hidden and only parts of the fish body are visible. This task is difficult even for a human, and the performance could perhaps be increased by including the fish appearance in the shape completion task. The simulation-to-reality gap could perhaps also be reduced by finetuning the U-Net on some real occlusions, which could increase the performance on the heavy occlusions in real marine imagery.
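Simulating occlusions on fully visible masks, as described above, pairs an artificially occluded input with the intact mask as the training target. A toy sketch of that data-generation step (the thesis operates on real fish masks, and the rectangle occluder here is an assumption for illustration):

```python
def simulate_occlusion(mask, top, left, height, width):
    """Return a copy of a binary fish mask with a rectangle zeroed out,
    producing a synthetic (occluded input, full target) training pair
    for a shape-completion network."""
    occluded = [row[:] for row in mask]  # deep-ish copy; original target kept intact
    for r in range(top, min(top + height, len(mask))):
        for c in range(left, min(left + width, len(mask[0]))):
            occluded[r][c] = 0
    return occluded

full = [[1, 1, 1, 1] for _ in range(4)]  # toy stand-in for a fish mask
occluded = simulate_occlusion(full, 0, 0, 2, 2)
print(sum(map(sum, occluded)))  # 16 mask pixels minus a 2x2 hole = 12
```

Mixing in unmodified masks (input equals target) is what teaches the network when no filling-in is needed.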
|
130 |
Depth-Aware Deep Learning Networks for Object Detection and Image Segmentation Dickens, James 01 September 2021 (has links)
The rise of convolutional neural networks (CNNs) in the context of computer vision
has occurred in tandem with the advancement of depth sensing technology.
Depth cameras yield two-dimensional arrays that store, at each pixel, the
distance from the sensor to objects and surfaces in the scene; aligned with
a regular color image, these form so-called RGBD images. Inspired by prior models
in the literature, this work develops a suite of RGBD CNN models to tackle
the challenging tasks of object detection, instance segmentation, and semantic
segmentation. Prominent architectures for object detection and image segmentation
are modified to incorporate dual backbone approaches inputting RGB and
depth images, combining features from both modalities through the use of novel
fusion modules. For each task, the models developed are competitive with state-of-the-art RGBD architectures. In particular, the proposed RGBD object detection
approach achieves 53.5% mAP on the SUN RGBD 19-class object detection
benchmark, while the proposed RGBD semantic segmentation architecture yields
69.4% accuracy with respect to the SUN RGBD 37-class semantic segmentation
benchmark. An original 13-class RGBD instance segmentation benchmark is introduced for the SUN RGBD dataset, for which the proposed model achieves 38.4%
mAP. Additionally, an original depth-aware panoptic segmentation model is developed, trained, and tested for new benchmarks conceived for the NYUDv2 and
SUN RGBD datasets. These benchmarks offer researchers a baseline for the task
of RGBD panoptic segmentation on these datasets, where the novel depth-aware
model outperforms a comparable RGB counterpart.
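A dual-backbone design combines per-location features from separate RGB and depth streams. As a hypothetical toy sketch of the fusion step (the thesis uses learned fusion modules inside CNNs; the gating rule and weights here are invented for illustration):

```python
def fuse_features(rgb_feat, depth_feat, w_depth=0.5):
    """Toy fusion of two same-shaped 2-D feature maps from an RGB and a depth
    backbone: a weighted elementwise sum where the depth feature is valid, and
    the RGB feature alone where depth is missing (None, e.g. sensor holes)."""
    fused = []
    for rgb_row, depth_row in zip(rgb_feat, depth_feat):
        row = []
        for r, d in zip(rgb_row, depth_row):
            row.append(r if d is None else (1 - w_depth) * r + w_depth * d)
        fused.append(row)
    return fused

print(fuse_features([[1.0, 2.0]], [[3.0, None]]))  # [[2.0, 2.0]]
```

In a learned fusion module the per-modality weights would be produced by the network rather than fixed, but the per-location combination of the two modalities is the same idea.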
|