61

Functional Verification of Arithmetic Circuits using Linear Algebra Methods

Ameer Abdul Kader, Mohamed Basith Abdul 01 January 2011 (has links) (PDF)
This thesis describes an efficient method for speeding up functional verification of arithmetic circuits, namely linear networks such as Wallace trees and counters, using linear algebra techniques. The circuit is represented as a network of half adders, full adders, and inverters, and modeled as a system of linear equations. The proof of functional correctness of the design is obtained by computing its algebraic signature with a standard linear programming (LP) solver and comparing it with the reference signature provided by the designer. Initial experimental results and a comparison with Satisfiability Modulo Theories (SMT) solvers show that the method is efficient, scalable, and applicable to complex arithmetic designs, including large multipliers. It is intended to provide a new front-end theory/engine to enhance SMT solvers.
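As a minimal sketch of the linear-equation modeling described above (not the author's tool; the combining weights are chosen by hand here rather than computed by an LP solver, and SymPy is assumed to be available), the following fragment checks the algebraic signature of a 2-bit adder built from one half adder and one full adder:

```python
# Hypothetical sketch: verify a 2-bit ripple-carry adder built from
# half/full adders by combining their linear (pseudo-Boolean) relations.
import sympy as sp

a0, a1, b0, b1 = sp.symbols('a0 a1 b0 b1')   # primary inputs
s0, s1 = sp.symbols('s0 s1')                 # sum outputs
c0, c1 = sp.symbols('c0 c1')                 # carries (c0 internal, c1 output)

# Linear relation contributed by each adder cell (lhs - rhs == 0):
#   half adder: in1 + in2       - (2*carry + sum) == 0
#   full adder: in1 + in2 + cin - (2*carry + sum) == 0
cells = [
    (a0 + b0)      - (2*c0 + s0),   # HA at bit position 0
    (a1 + b1 + c0) - (2*c1 + s1),   # FA at bit position 1
]

# Reference (designer-provided) signature: the outputs should encode A + B.
sig_out = s0 + 2*s1 + 4*c1
sig_in  = (a0 + 2*a1) + (b0 + 2*b1)

# Combine the cell relations with weights 2^bit; if the residue of
# sig_out - sig_in vanishes, the network implements A + B.
residue = sp.expand((sig_out - sig_in) + sum(2**i * cells[i] for i in range(len(cells))))
print("functionally correct:", residue == 0)   # expect True
```

Each cell contributes one linear relation over its ports, and the design is correct exactly when the output signature reduces to the input signature modulo those relations; in the thesis the reduction coefficients are found automatically by the LP solver.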
62

Contention-Aware Scheduling for SMT Multicore Processors

Feliu Pérez, Josué 27 March 2017 (has links)
The recent multicore era and the incoming manycore/manythread era pose many challenges for computer scientists, ranging from productive parallel programming, through network congestion avoidance and intelligent power management, to circuit design issues. The ultimate goal is to squeeze out as much performance as possible while limiting power and energy consumption and guaranteeing reliable execution. The increasing number of hardware contexts in current and future systems makes the scheduler an important component for achieving this goal, as there is often a combinatorial number of ways to schedule the distinct threads or applications, each with different performance due to inter-application interference. Picking an optimal schedule can result in substantial performance gains. This thesis deals with inter-application interference, covering the problems it causes for performance and fairness on real machines. The study starts with single-threaded multicore processors (Intel Xeon X3320), follows with simultaneous multithreading (SMT) multicores supporting up to two threads per core (Intel Xeon E5645), and ends with the most highly threaded per-core processor built to date (IBM POWER8). The dissertation analyzes the main contention points of each experimental platform and proposes scheduling algorithms that tackle the interference arising at each contention point to improve system throughput and fairness. First, we analyze contention throughout the memory hierarchy of current multicore processors. These studies reveal high performance degradation due to contention on main memory and on any shared cache the processors implement. To mitigate such contention, we propose different bandwidth-aware scheduling algorithms whose key idea is to balance memory accesses over the workload's execution time and cache requests among the different caches at each cache level. The high interference that applications suffer when running simultaneously on the same SMT core, however, not only affects performance but can also compromise system fairness. In this dissertation, we also analyze fairness in current SMT multicores. To improve system fairness, we design progress-aware scheduling algorithms that estimate, at runtime, how the processes progress, which makes it possible to improve fairness by prioritizing the processes with lower accumulated progress. Finally, this dissertation tackles inter-application contention in the IBM POWER8 system with a symbiotic scheduler that addresses overall SMT interference. The symbiotic scheduler uses an SMT interference model, based on CPI stacks, that estimates the slowdown of any combination of applications if they are scheduled on the same SMT core. The number of possible schedules, however, grows so fast with the number of applications that exploring all combinations is infeasible. To overcome this issue, the symbiotic scheduler models the scheduling problem as a graph problem, which allows the optimal schedule to be found in reasonable time. In summary, this thesis addresses contention in the shared resources of the memory hierarchy and SMT cores of multicore processors. We identify the main contention points of three systems with different architectures and propose scheduling algorithms to tackle contention at these points. The evaluation on the real systems shows the benefits of the proposed algorithms. The symbiotic scheduler improves system throughput by 6.7% over Linux.
Regarding fairness, the proposed progress-aware scheduler reduces Linux unfairness to a third. Besides, since the proposed algorithm are completely software-based, they could be incorporated as scheduling policies in Linux and used in small-scale servers to achieve the mentioned benefits. / La actual era multinúcleo y la futura era manycore/manythread generan grandes retos en el área de la computación incluyendo, entre otros, la programación paralela productiva o la gestión eficiente de la energía. El último objetivo es alcanzar las mayores prestaciones limitando el consumo energético y garantizando una ejecución confiable. El incremento del número de contextos hardware de los sistemas hace que el planificador se convierta en un componente importante para lograr este objetivo debido a que existen múltiples formas diferentes de planificar las aplicaciones, cada una con distintas prestaciones debido a las interferencias que se producen entre las aplicaciones. Seleccionar la planificación óptima puede proporcionar importantes mejoras de prestaciones. Esta tesis se ocupa de las interferencias entre aplicaciones, cubriendo los problemas que causan en las prestaciones y equidad de los sistemas actuales. El estudio empieza con procesadores multinúcleo monohilo (Intel Xeon X3320), sigue con multinúcleos con soporte para la ejecución simultanea (SMT) de dos hilos (Intel Xeon E5645), y llega al procesador que actualmente soporta un mayor número de hilos por núcleo (IBM POWER8). La disertación analiza los principales puntos de contención en cada plataforma y propone algoritmos de planificación que mitigan las interferencias que se generan en cada uno de ellos para mejorar la productividad y equidad de los sistemas. En primer lugar, analizamos la contención a lo largo de la jerarquía de memoria. Los estudios realizados revelan la alta degradación de prestaciones provocada por la contención en memoria principal y en cualquier cache compartida. Para mitigar esta contención, proponemos diversos algoritmos de planificación cuya idea principal es distribuir los accesos a memoria a lo largo del tiempo de ejecución de la carga y las peticiones a las caches entre las diferentes caches compartidas en cada nivel. Las altas interferencias que sufren las aplicaciones que se ejecutan simultáneamente en un núcleo SMT, sin embargo, no solo afectan a las prestaciones, sino que también pueden comprometer la equidad del sistema. En esta tesis, también abordamos la equidad en los actuales multinúcleos SMT. Para mejorarla, diseñamos algoritmos de planificación que estiman el progreso de las aplicaciones en tiempo de ejecución, lo que permite priorizar los procesos con menor progreso acumulado para reducir la inequidad. Finalmente, la tesis se centra en la contención entre aplicaciones en el sistema IBM POWER8 con un planificador simbiótico que aborda la contención en todo el núcleo SMT. El planificador simbiótico utiliza un modelo de interferencia basado en pilas de CPI que predice las prestaciones para la ejecución de cualquier combinación de aplicaciones en un núcleo SMT. El número de posibles planificaciones, no obstante, crece muy rápido y hace inviable explorar todas las posibles combinaciones. Por ello, el problema de planificación se modela como un problema de teoría de grafos, lo que permite obtener la planificación óptima en un tiempo razonable. En resumen, esta tesis aborda la contención en los recursos compartidos en la jerarquía de memoria y el núcleo SMT de los procesadores multinúcleo. 
Identificamos los principales puntos de contención de tres sistemas con diferentes arquitecturas y proponemos algoritmos de planificación para mitigar esta contención. La evaluación en sistemas reales muestra las mejoras proporcionados por los algoritmos propuestos. Así, el planificador simbiótico mejora la productividad, en promedio, un 6.7% con respecto a Linux. En cuanto a la equidad, el planificador que considera el progreso consigue reducir la inequidad de Linux a una tercera parte. Además, dado que los algoritmos propuestos son completamente software, podrían incorporarse como políticas de planificación en Linux y usarse en servidores a pequeña escala para obtener los benefi / L'actual era multinucli i la futura era manycore/manythread generen grans reptes en l'àrea de la computació incloent, entre d'altres, la programació paral·lela productiva o la gestió eficient de l'energia. L'últim objectiu és assolir les majors prestacions limitant el consum energètic i garantint una execució confiable. L'increment del número de contextos hardware dels sistemes fa que el planificador es convertisca en un component important per assolir aquest objectiu donat que existeixen múltiples formes distintes de planificar les aplicacions, cadascuna amb unes prestacions diferents degut a les interferències que es produeixen entre les aplicacions. Seleccionar la planificació òptima pot donar lloc a millores importants de les prestacions. Aquesta tesi s'ocupa de les interferències entre aplicacions, cobrint els problemes que provoquen en les prestacions i l'equitat dels sistemes actuals. L'estudi comença amb processadors multinucli monofil (Intel Xeon X3320), segueix amb multinuclis amb suport per a l'execució simultània (SMT) de dos fils (Intel Xeon E5645), i arriba al processador que actualment suporta un major nombre de fils per nucli (IBM POWER8). Aquesta dissertació analitza els principals punts de contenció en cada plataforma i proposa algoritmes de planificació que aborden les interferències que es generen en cadascun d'ells per a millorar la productivitat i l'equitat dels sistemes. En primer lloc, estudiem la contenció al llarg de la jerarquia de memòria en els processadors multinucli. Els estudis realitzats revelen l'alta degradació de prestacions provocada per la contenció en memòria principal i en qualsevol cache compartida. Per a mitigar la contenció, proposem diversos algoritmes de planificació amb la idea principal de distribuir els accessos a memòria al llarg del temps d'execució de la càrrega i les peticions a les caches entre les diferents caches compartides en cada nivell. Les altes interferències que sofreixen las aplicacions que s'executen simultàniament en un nucli SMT, no obstant, no sols afecten a las prestacions, sinó que també poden comprometre l'equitat del sistema. En aquesta tesi, també abordem l'equitat en els actuals multinuclis SMT. Per a millorar-la, dissenyem algoritmes de planificació que estimen el progrés de les aplicacions en temps d'execució, el que permet prioritzar els processos amb menor progrés acumulat para a reduir la inequitat. Finalment, la tesi es centra en la contenció entre aplicacions en el sistema IBM POWER8 amb un planificador simbiòtic que aborda la contenció en tot el nucli SMT. El planificador simbiòtic utilitza un model d'interferència basat en piles de CPI que prediu les prestacions per a l'execució de qualsevol combinació d'aplicacions en un nucli SMT. 
El nombre de possibles planificacions, no obstant, creix molt ràpid i fa inviable explorar totes les possibles combinacions. Per resoldre aquest contratemps, el problema de planificació es modela com un problema de teoria de grafs, la qual cosa permet obtenir la planificació òptima en un temps raonable. En resum, aquesta tesi aborda la contenció en els recursos compartits en la jerarquia de memòria i el nucli SMT dels processadors multinucli. Identifiquem els principals punts de contenció de tres sistemes amb diferents arquitectures i proposem algoritmes de planificació per a mitigar aquesta contenció. L'avaluació en sistemes reals mostra les millores proporcionades pels algoritmes proposats. Així, el planificador simbiòtic millora la productivitat una mitjana del 6.7% respecte a Linux. Pel que fa a l'equitat, el planificador que considera el progrés aconsegueix reduir la inequitat de Linux a una tercera part. A més, donat que els algoritmes proposats son completament software, podrien incorporar-se com a polítiques de planificació en Linux i emprar-se en servidors a petita escala per obtenir els avantatges mencionats. / Feliu Pérez, J. (2017). Contention-Aware Scheduling for SMT Multicore Processors [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/79081 / Premios Extraordinarios de tesis doctorales
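The graph formulation sketched in the abstract above can be illustrated with a toy example. The following Python sketch uses invented slowdown numbers and a brute-force search as a stand-in for the polynomial-time minimum-weight perfect matching used in the thesis; it pairs four applications onto 2-way SMT cores so that the total predicted slowdown is minimal:

```python
# Illustrative sketch (not the thesis' actual model): pair applications onto
# 2-way SMT cores so that the total predicted slowdown is minimal.
from itertools import combinations

def predicted_slowdown(app_a, app_b, model):
    """Placeholder for a CPI-stack-based interference model: combined
    slowdown when app_a and app_b share one SMT core."""
    return model[frozenset((app_a, app_b))]

def best_pairing(apps, model):
    """Brute-force the minimum-cost perfect matching (assumes an even number
    of applications; fine for small workloads)."""
    if not apps:
        return 0.0, []
    first, rest = apps[0], apps[1:]
    best_cost, best_sched = float('inf'), None
    for partner in rest:
        remaining = [a for a in rest if a != partner]
        cost, sched = best_pairing(remaining, model)
        cost += predicted_slowdown(first, partner, model)
        if cost < best_cost:
            best_cost, best_sched = cost, [(first, partner)] + sched
    return best_cost, best_sched

# Toy interference model for four applications (made-up numbers).
apps = ['A', 'B', 'C', 'D']
model = {frozenset(p): w for p, w in zip(combinations(apps, 2),
                                         [1.9, 1.2, 1.4, 1.5, 1.3, 1.8])}
cost, schedule = best_pairing(apps, model)
print(schedule, cost)   # [('A', 'C'), ('B', 'D')] with minimal total slowdown
```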
63

Static analysis of program by Abstract Interpretation and Decision Procedures / Analyse statique par interprétation abstraite et procédures de décision

Henry, Julien 13 October 2014 (has links)
L'analyse statique de programme a pour but de prouver automatiquement qu'un programme vérifie certaines propriétés. L'interprétation abstraite est un cadre théorique permettant de calculer des invariants de programme. Ces invariants sont des propriétés sur les variables du programme vraies pour toute exécution. La précision des invariants calculés dépend de nombreux paramètres, en particulier du domaine abstrait et de l'ordre d'itération utilisés pendant le calcul d'invariants. Dans cette thèse, nous proposons plusieurs extensions de cette méthode qui améliorent la précision de l'analyse. Habituellement, l'interprétation abstraite consiste en un calcul de point fixe d'un opérateur obtenu après convergence d'une séquence ascendante, utilisant un opérateur appelé élargissement. Le point fixe obtenu est alors un invariant. Il est ensuite possible d'améliorer cet invariant via une séquence descendante sans élargissement. Nous proposons une méthode pour améliorer un point fixe après la séquence descendante, en recommençant une nouvelle séquence depuis une valeur initiale choisie judicieusement. L'interprétation abstraite peut également être rendue plus précise en distinguant tous les chemins d'exécution du programme, au prix d'une explosion exponentielle de la complexité. Le problème de satisfiabilité modulo théorie (SMT), dont les techniques de résolution ont été grandement améliorées cette décennie, permet de représenter ces ensembles de chemins implicitement. Nous proposons d'utiliser cette représentation implicite à base de SMT et de l'appliquer à des ordres d'itération de l'état de l'art pour obtenir des analyses plus précises. Nous proposons ensuite de coupler SMT et interprétation abstraite au sein de nouveaux algorithmes appelés Modular Path Focusing et Property-Guided Path Focusing, qui calculent des résumés de boucles et de fonctions de façon modulaire, guidés par des traces d'erreur. Notre technique a différents usages : elle permet de montrer qu'un état d'erreur est inatteignable, mais également d'inférer des préconditions aux boucles et aux fonctions. Nous appliquons nos méthodes d'analyse statique à l'estimation du temps d'exécution pire cas (WCET). Dans un premier temps, nous présentons la façon d'exprimer ce problème via l'optimisation modulo théorie, et pourquoi un encodage naturel du problème en SMT génère des formules trop difficiles pour l'ensemble des solveurs actuels. Nous proposons un moyen simple et efficace de réduire considérablement le temps de calcul des solveurs SMT en ajoutant aux formules certaines propriétés impliquées obtenues par analyse statique. Enfin, nous présentons l'implémentation de Pagai, un nouvel analyseur statique pour LLVM, qui calcule des invariants numériques grâce aux différentes méthodes décrites dans cette thèse. Nous avons comparé les différentes techniques implémentées sur des programmes open-source et des benchmarks utilisés par la communauté. / Static program analysis aims at automatically determining whether a program satisfies some particular properties. For this purpose, abstract interpretation is a framework that enables the computation of invariants, i.e. properties on the variables that always hold for any program execution. The precision of these invariants depends on many parameters, in particular the abstract domain and the iteration strategy for computing these invariants.
In this thesis, we propose several improvements on the abstract interpretation framework that enhance the overall precision of the analysis. Usually, abstract interpretation consists in computing an ascending sequence with widening, which converges towards a fixpoint which is a program invariant; then computing a descending sequence of correct solutions without widening. We describe and experiment with a method to improve a fixpoint after its computation, by starting again a new ascending/descending sequence with a smarter starting value. Abstract interpretation can also be made more precise by distinguishing paths inside loops, at the expense of possibly exponential complexity. Satisfiability modulo theories (SMT), whose efficiency has been considerably improved in the last decade, allows sparse representations of paths and sets of paths. We propose to combine this SMT representation of paths with various state-of-the-art iteration strategies to further improve the overall precision of the analysis. We propose a second coupling between abstract interpretation and SMT in a program verification framework called Modular Path Focusing, that computes function and loop summaries by abstract interpretation in a modular fashion, guided by error paths obtained with SMT. Our framework can be used for various purposes: it can prove the unreachability of certain error program states, but can also synthesize function/loop preconditions for which these error states are unreachable. We then describe an application of static analysis and SMT to the estimation of program worst-case execution time (WCET). We first present how to express WCET as an optimization modulo theory problem, and show that natural encodings into SMT yield formulas intractable for all current production-grade solvers. We propose an efficient way to considerably reduce the computation time of the SMT solvers by conjoining to the formulas well-chosen summaries of program portions obtained by static analysis. We finally describe the design and the implementation of Pagai, a new static analyzer working over the LLVM compiler infrastructure, which computes numerical inductive invariants using the various techniques described in this thesis. Because of the non-monotonicity of the results of abstract interpretation with widening operators, it is difficult to conclude that some abstraction is more precise than another based on theoretical local precision results. We thus conducted extensive comparisons between our new techniques and previous ones, on a variety of open-source packages and benchmarks used in the community.
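The ascending/descending iteration described above can be illustrated on a single loop. The following self-contained Python sketch (a generic interval-domain example, not Pagai itself) runs a widened ascending sequence and one descending step for `i = 0; while i < 100: i += 1`:

```python
# Minimal illustrative sketch of interval analysis with widening followed by
# a descending (narrowing) pass, for the loop: i = 0; while i < 100: i += 1
import math

def widen(old, new):
    """Classic interval widening: unstable bounds jump to +/- infinity."""
    lo = old[0] if new[0] >= old[0] else -math.inf
    hi = old[1] if new[1] <= old[1] else math.inf
    return (lo, hi)

def join(a, b):
    return (min(a[0], b[0]), max(a[1], b[1]))

def loop_body(interval):
    """Abstract transformer: filter by the guard i < 100, then apply i += 1."""
    lo, hi = interval
    hi = min(hi, 99)                  # guard i < 100
    return (lo + 1, hi + 1)           # abstract effect of i += 1

init = (0, 0)                          # i = 0 before the loop
x = init
# Ascending sequence with widening: reaches the post-fixpoint (0, +inf).
while True:
    nxt = widen(x, join(init, loop_body(x)))
    if nxt == x:
        break
    x = nxt
# One descending (narrowing) iteration recovers the precise invariant (0, 100).
x = join(init, loop_body(x))
print("loop-head invariant for i:", x)
```

The widening step overshoots to (0, +inf) to force termination, and the single descending iteration without widening tightens the result back to 0 ≤ i ≤ 100, which is exactly the improvement the ascending/descending scheme is meant to provide.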
64

Test symbolique de services web composite / Symbolic Testing Approach of Composite Web Services

Bentakouk, Lina 16 December 2011 (has links)
L'acceptation et l'utilisation des services Web en industrie se développent de par leur support au développement d'applications distribuées comme compositions d'entités logicielles plus simples appelées services. En complément à la vérification, le test permet de vérifier la correction d'une implémentation binaire (code source non disponible) par rapport à une spécification. Dans cette thèse, nous proposons une approche boîte-noire du test de conformité de compositions de services centralisées (orchestrations). Par rapport à l'état de l'art, nous développons une approche symbolique de façon à éviter des problèmes d'explosion d'espace d'état dus à la large utilisation de données XML dans les services Web. Cette approche est basée sur des modèles symboliques (STS), l'exécution symbolique de ces modèles et l'utilisation d'un solveur SMT. De plus, nous proposons une approche de bout en bout, qui va de la spécification à l'aide d'un langage normalisé d'orchestration (ABPEL) et de la possible description d'objectifs de tests à la concrétisation et l'exécution en ligne de cas de tests symboliques. Un point important est notre transformation de modèle entre ABPEL et les STS, qui prend en compte les spécifications sémantiques d'ABPEL. L'automatisation de notre approche est supportée par un ensemble d'outils que nous avons développés. / Web services are gaining industry-wide acceptance and usage by fostering the development of distributed applications out of the composition of simpler entities called services. In complement to verification, testing allows one to check for the correctness of a binary (no source code) service implementation with reference to a specification. In this thesis, we propose a black-box conformance testing approach for centralized service compositions (orchestrations). With reference to the state of the art, we develop a symbolic approach in order to avoid state space explosion issues due to the XML data being largely used in Web services. This approach is based on symbolic models (STS), symbolic execution, and the use of a satisfiability modulo theories (SMT) solver. Further, we propose a comprehensive end-to-end approach that goes from specification using a standard orchestration language (ABPEL), and the possible description of test purposes, to the online realization and execution of symbolic test cases against an implementation. A crucial point is a model transformation from ABPEL to STS that we have defined and that takes into account the peculiarities of ABPEL semantics. The automation of our approach is supported by a tool chain that we have developed.
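As an illustration of the "symbolic execution plus SMT solver" step described above (assuming the z3-solver Python package is installed; this is not the thesis' tool chain, and the constraint names are invented), the following sketch turns the path condition of one orchestration branch into a concrete test input:

```python
# Illustrative sketch: derive a concrete test input from the path condition
# accumulated by symbolic execution along one orchestration branch.
from z3 import Int, Solver, sat

amount = Int('amount')          # symbolic value of an XML message field
path_condition = [amount > 0, amount <= 1000, amount % 10 == 0]

s = Solver()
s.add(*path_condition)          # assert the branch's path condition
if s.check() == sat:
    model = s.model()
    print("test input covering this path: amount =", model[amount])
else:
    print("path infeasible: no test case needed for this branch")
```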
65

Stratégies de génération de tests à partir de modèles UML/OCL interprétés en logique du premier ordre et système de contraintes. / Test generation strategies from UML/OCL models interpreted with first order logic constraints system

Cantenot, Jérôme 13 November 2013 (has links)
Les travaux présentés dans cette thèse proposent une méthode de génération automatique de tests à partir de modèles. Cette méthode emploie deux langages de modélisation, UML4MBT et OCL4MBT, qui ont été spécifiquement dérivés d'UML et OCL pour la génération de tests. Ainsi les comportements, la structure et l'état initial du système sont décrits au travers des diagrammes de classes, d'objets et d'états-transitions. Pour générer des tests, l'évolution du modèle est représentée sous la forme d'un système de transitions. Ainsi la construction de tests est équivalente à la découverte de séquences de transitions qui relient l'état initial du système à des états validant les cibles de test. Ces séquences sont obtenues par la résolution de scénarios d'animation par des prouveurs SMT et des solveurs CSP. Pour créer ces scénarios, des méta-modèles UML4MBT et CSP4MBT regroupant formules logiques et notions liées aux tests ont été établis pour chacun des outils. Afin d'optimiser les temps de génération, des stratégies ont été développées pour sélectionner et hiérarchiser les scénarios à résoudre. Ces stratégies s'appuient sur la parallélisation, les propriétés des solveurs et des prouveurs et les caractéristiques de nos encodages pour optimiser les performances. 5 stratégies emploient uniquement un prouveur et 2 stratégies reposent sur une collaboration du prouveur avec un solveur. Finalement, l'intérêt de cette nouvelle méthode a été validé sur des cas d'études grâce à l'implémentation réalisée. / This thesis describes an automatic test generation process from models. This process uses two modelling languages, UML4MBT and OCL4MBT, created specifically for test generation. These languages are derived from UML and OCL. Therefore the behaviours, the structure and the initial state of the system are described by the class diagram, the object diagram and the state-chart. To generate tests, the evolution of the model is encoded with a transition system. Consequently, to construct a test is to find transition sequences that link the initial state of the system to the states described by the test targets. The sequences are obtained by the resolution of animation scenarios. This resolution is executed by SMT provers and CSP solvers. To create the scenarios, two dedicated meta-models, UML4MBT and CSP4MBT, have been established. These meta-models associate first-order logic formulas with the test notions. 7 strategies have been developed to improve the test generation time. A strategy is responsible for the selection and the prioritization of the scenarios. A strategy is built upon the properties of the solvers and provers and the specification of our encoding process. Moreover, the process can also be parallelized to get better performance. 5 strategies employ only a prover and 2 make the prover collaborate with a solver. Finally, the interest of this process has been evaluated through benchmarks on various case studies.
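To make the "transition sequences from the initial state to the test targets" idea concrete, here is a small Python sketch. It uses an explicit breadth-first search over a toy transition system as a stand-in for the SMT-prover/CSP-solver resolution of animation scenarios used in the thesis; all state and operation names are invented:

```python
# Illustrative sketch (not the UML4MBT tooling): find a transition sequence
# from the initial state to a state satisfying a test target.
from collections import deque

# Toy transition system: state -> [(operation, next_state), ...]
transitions = {
    'idle':          [('insert_card', 'card_in')],
    'card_in':       [('enter_pin', 'authenticated'), ('eject', 'idle')],
    'authenticated': [('withdraw', 'dispensing'), ('eject', 'idle')],
    'dispensing':    [('take_cash', 'idle')],
}

def generate_test(initial, target_predicate):
    """Return the shortest operation sequence reaching a target state."""
    queue = deque([(initial, [])])
    visited = {initial}
    while queue:
        state, path = queue.popleft()
        if target_predicate(state):
            return path
        for op, nxt in transitions.get(state, []):
            if nxt not in visited:
                visited.add(nxt)
                queue.append((nxt, path + [op]))
    return None   # target unreachable

print(generate_test('idle', lambda s: s == 'dispensing'))
# ['insert_card', 'enter_pin', 'withdraw']
```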
66

Quantitative Study of Membrane Nano-organization by Single Nanoparticle Imaging / Etude quantitative de la Nano-organisation Membranaire par Imagerie Simple de Nanoparticules

Yu, Chao 24 July 2019 (has links)
La nano-organisation de la membrane cellulaire est essentielle à la régulation de certaines fonctions cellulaires. Dans cette thèse, les récepteurs EGF, CPεT et de la transferrine ont été marqués avec des nanoparticules luminescentes et ont été suivis à la fois dans leur environnement local dans la membrane cellulaire vivante pour de longues durées et sous un flux hydrodynamique. Nous avons alors appliqué des techniques d'inférence bayésienne, d'arbre de décision et de clustering de données pour extraire des informations quantitatives sur les paramètres caractéristiques du mouvement des récepteurs, notamment la forme de leur confinement dans des microdomaines. L'application d'une force hydrodynamique sur les nanoparticules nous a alors permis de sonder les interactions auxquelles ces récepteurs sont soumis. Nous avons appliqué cette approche in vitro pour favoriser et mesurer la dissociation in vitro de paires récepteur / ligand à haute affinité entre des récepteurs membranaires et leurs ligands pharmaceutiques, telles que HB-EGF et DTR, et l'avons ensuite appliquée à l'étude d'interactions à la membrane cellulaire. Nous avons ainsi mis en évidence trois modes différents d'organisation de la membrane et de confinement des récepteurs : le confinement de CPεTR est déterminé par l'interaction entre les récepteurs et les constituants lipidiques / protéiques des microdomaines ; le potentiel de confinement de l'EGFR résulte de l'interaction avec les lipides et les protéines de l'environnement du radeau et de l'interaction avec la F-actine ; les récepteurs de la transferrine diffusent librement dans la membrane, uniquement limités stériquement par des barrières d'actine, selon le modèle 'picket-and-fence'. Nous avons de plus montré que les nanodomaines de type radeau sont rattachés au cytosquelette d'actine. Ce travail présente donc à la fois un aperçu quantitatif des mécanismes d'organisation des récepteurs membranaires à l'échelle nanométrique et établit un cadre méthodologique avec lequel différents types de propriétés membranaires peuvent être étudiés. / In this thesis, EGF, CPεT and transferrin receptors were labeled with luminescent nanoparticles and were tracked both in their local environment in the cell membrane and under a hydrodynamic flow. Bayesian inference, Bayesian decision tree, and data clustering techniques can then be applied to obtain quantitative information on the receptor motion parameters. Furthermore, we introduced hydrodynamic force application in vitro to study biomolecule dissociation between membrane receptors and their pharmaceutical ligands in high-affinity receptor-ligand pairs, such as HB-EGF and DTR. Finally, three different modes of membrane organization and receptor confinement were revealed: the confinement of CPεTR is determined by the interaction between the receptors and the lipid/protein constituents of the raft; the confining potential of EGFR results from the interaction with lipids and proteins of the raft environment and from the interaction with F-actin; transferrin receptors diffuse freely in the membrane, only sterically limited by actin barriers, according to the "picket-and-fence" model. We moreover showed that all raft nanodomains are attached to the actin cytoskeleton.
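As background for the kind of quantitative motion analysis mentioned above (a generic single-particle-tracking calculation, not the thesis' Bayesian inference pipeline; the trajectory below is simulated), the following Python sketch estimates a 2D diffusion coefficient from a tracked trajectory via the mean squared displacement, using MSD(t) = 4Dt for free diffusion:

```python
# Generic single-particle-tracking sketch: estimate the mean squared
# displacement (MSD) and a 2D diffusion coefficient from one trajectory.
import numpy as np

def mean_squared_displacement(xy, max_lag):
    """xy: (N, 2) array of positions (um); returns MSD for lags 1..max_lag."""
    msd = np.empty(max_lag)
    for lag in range(1, max_lag + 1):
        disp = xy[lag:] - xy[:-lag]
        msd[lag - 1] = np.mean(np.sum(disp**2, axis=1))
    return msd

# Simulate a freely diffusing receptor: D = 0.1 um^2/s, 20 ms frames.
rng = np.random.default_rng(0)
D, dt, n_frames = 0.1, 0.02, 2000
steps = rng.normal(0.0, np.sqrt(2 * D * dt), size=(n_frames, 2))
trajectory = np.cumsum(steps, axis=0)

lags = np.arange(1, 11) * dt
msd = mean_squared_displacement(trajectory, 10)
D_est = np.polyfit(lags, msd, 1)[0] / 4      # slope / 4 for 2D diffusion
print(f"estimated D = {D_est:.3f} um^2/s")   # close to the true 0.1
```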
67

LF-PKI: Practical, Secure, and High-Performance Design and Implementation of a Lite Flexible PKI / LF-PKI: Praktisk, säker och Högpresterande design och Implementering av Lite Flexible PKI

Xu, Yongzhe January 2022 (has links)
Today’s Web Public Key Infrastructure (PKI) builds on a homogeneous trust model. All Certificate Authorities (CAs) are equally trusted once they are marked as trusted CAs on the client side. As a result, the security of the Web PKI depends on the weakest CA. Trust heterogeneity and flexibility can be introduced in today’s Web PKI to mitigate the problem. Each client could have different levels of trust in each trusted CA, according to the properties of each CA, such as the location, reputation and scale of the CA. As a result, the loss caused by the compromise of a less trusted CA will be relieved. In this work, we study Flexible-PKI (F-PKI), which is an enhancement of Web PKI, and propose Lite Flexible-PKI (LF-PKI) to address the limitations of F-PKI. LF-PKI is designed to securely and efficiently manage domain policies and enable trust heterogeneity on the client side. The domain owner can issue domain policies for their domains, and the client will have a complete view of the domain policies issued for a specific domain. Based on the collection of domain policies from LF-PKI, trust heterogeneity can be achieved on the client side. Each client will choose the domain policies based on the trust levels of the CA. On the basis of the LF-PKI design, a high-performance implementation of LF-PKI was developed, optimized, and analyzed. The optimized implementation can provide the LF-PKI services for worldwide domains on a single server with moderate hardware. / Dagens Web Public Key Infrastructure (PKI) bygger på en homogen förtroendemodell. Alla certifikatutfärdare (CA) är lika betrodda när de är markerade som betrodda certifikatutfärdare på klientsidan. Som ett resultat beror säkerheten för webb-PKI på den svagaste CA. Förtroendeheterogenitet och flexibilitet kan införas i dagens webb-PKI för att mildra problemet. Varje klient kan ha olika nivåer av förtroende för varje betrodd certifikatutfärdare, beroende på egenskaperna hos varje certifikatutfärdare, såsom certifikatutfärdarens plats, rykte och omfattning. Som ett resultat kommer förlusten som orsakats av kompromissen av en mindre pålitlig CA att avhjälpas. I detta arbete studerar vi Flexible-PKI (F-PKI), som är en förbättring av webb-PKI, och föreslår Lite Flexible-PKI (LF-PKI) för att ta itu med begränsningarna hos F-PKI. LF-PKI är utformad för att säkert och effektivt hantera domänpolicyer och möjliggöra förtroendeheterogenitet på klientsidan. Domänägaren kan utfärda domänpolicyer för sina domäner, och klienten kommer att ha en fullständig bild av domänpolicyerna som utfärdats för en specifik domän. Baserat på insamlingen av domänpolicyer från LF-PKI kan förtroendeheterogenitet uppnås på klientsidan. Varje klient kommer att välja domänpolicyer baserat på förtroendenivåerna för CA. På basis av LF-PKI-designen utvecklades, optimerades och analyserades en högpresterande implementering av LF-PKI. Den optimerade implementeringen kan tillhandahålla LF-PKI-tjänster för världsomspännande domäner på en enda server med måttlig hårdvara.
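A hypothetical sketch of the client-side trust heterogeneity described above; all CA names, trust values and the threshold are invented for illustration, and this is not the LF-PKI implementation:

```python
# Hypothetical client-side policy selection: each client assigns its own
# trust level to every CA and accepts, among the policies issued for a
# domain, only those backed by sufficiently trusted CAs.
from dataclasses import dataclass

@dataclass
class DomainPolicy:
    domain: str
    issuer_ca: str      # CA that vouches for this policy
    allowed_cas: set    # CAs the domain owner permits to issue its certificates

# Client-specific trust levels (0.0 = untrusted, 1.0 = fully trusted).
ca_trust = {'ca-large-audited': 0.9, 'ca-regional': 0.6, 'ca-unknown': 0.2}

def effective_policy(policies, trust, threshold=0.5):
    """Keep policies from CAs above the client's trust threshold and prefer
    the one backed by the most trusted CA."""
    acceptable = [p for p in policies if trust.get(p.issuer_ca, 0.0) >= threshold]
    if not acceptable:
        return None                       # fall back to legacy Web PKI behaviour
    return max(acceptable, key=lambda p: trust[p.issuer_ca])

policies = [
    DomainPolicy('example.com', 'ca-large-audited', {'ca-large-audited'}),
    DomainPolicy('example.com', 'ca-unknown', {'ca-unknown', 'ca-regional'}),
]
print(effective_policy(policies, ca_trust))  # policy issued by 'ca-large-audited'
```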
68

Lagerplats och logistiklösningar / Stock location and logistic solution

Anongdeth, Alexander, Mikaelsson, Oskar January 2021 (has links)
Examensarbetet utfördes under våren 2021 på ett svenskt företag i Centrala Sverige. Uppdraget har som syfte haft att få en objektiv inblick i företagets lagersituation för försäljningen i EMEA. Uppdragets mål var att undersöka vart kostnadsbesparingar kunde göras, hur koldioxidutsläpp kunde reduceras för transporter av SMT produkter och hur servicenivån till kunderna kunde öka med att korta ned interna och externa ledtider. För att möjliggöra dessa kriterier undersöktes den befintliga lagersituationen mot en omställning för att komma närmare kunderna i EMEA. Med syfte att minska på transportdistanserna från lager till kund och från produktion till lager. För att möjliggöra arbete har information givits av företaget där ansvariga personer för respektive avdelning och en förstudie om vart ett lager bör placeras. Förstudien har jämförts mot den interna datainsamlingen. Interndatasamlingen har bestått av intervjuer och analyser utav tidigare arbeten i form av säljvolym och kundlokalisation granskats för att få en inblick i hur materialet flödar vid försäljning i EMEA. Logistiken för nuläget har gett en djupare förståelse i hur problemet uppstått och varför förbättringar önskats. När slut produktionen ligger närmare slutkunden än vad det befintliga central lagret gör i dagensläge antas att förbättringar kan göras. Med en granskning av företagets transportkostnader, lagerkostnader och distansen till samtliga kunder har resultatet visat på att vinningar kan göras på samtliga punkter. / This thesis project was carried out in the spring of 2021 at a Swedish-owned company located in central Sweden. The purpose of the assignment was to gain an objective insight into the company's stock situation for sales in EMEA, and then to investigate where cost savings could be made, how carbon dioxide emissions from the transport of SMT products could be reduced, and how the level of service to customers could be increased by shortening internal and external lead times. To enable these criteria, the existing stock situation was examined against a changeover to get closer to the customers in EMEA, aiming to reduce the transport distance from warehouse to customer and from production to warehouse. To enable this project, information was provided by the people responsible for each department, and a feasibility study was carried out on where the warehouse should be located. The feasibility study has been compared with the internal data collection, which consisted of interviews and analyses of previous work in the form of sales volume. Customer locations have also been examined to gain an insight into how the material flows during sales in EMEA. The logistics of the current situation provide a deeper understanding of how the problem arose and why improvements were needed. Since the finishing production is located closer to the company's customers than the existing central warehouse is in the current situation, it is assumed that improvements can be made. A careful examination of the company's transport costs, warehouse costs and the distance to all customers in EMEA shows that gains can be made on all of these points.
69

The role of school management in promoting healthy learning environments for Grade R learners / Mamotsekua Gladys Kolokoto

Kolokoto, Mamotsekua Gladys January 2014 (has links)
The main aim of this study was to investigate the role of school management in promoting healthy school environments for Grade R learners in the Sedibeng West District. A literature review revealed that there are two types of health programmes: those that support the curriculum and those that are part of the curriculum. School managers have to focus on both in their efforts to promote health in schools. South African schools adopted a Whole School Approach to creating and sustaining healthy environments. The Whole School Approach includes the development of health policies, health education, community, learner and teacher involvement, nutrition, and the prevention of communicable diseases. A qualitative research approach was used and data was generated by means of interviews, documents, photographs and narratives. Four research sites were purposefully selected, and four principals, three Heads of Department for the Foundation Phase, four health coordinators and four Grade R practitioners participated in this research. Only one of the research sites had a School Based Health Centre. The study revealed that curriculum-based health programmes, including physical education, physical activities and health education, were effectively implemented, although they were not effectively monitored and evaluated. Health programmes supporting the curriculum include nutrition, first aid and health services. Both health services and nutrition were effectively implemented and monitored, whereas there were serious problems with first aid. Practitioners were not trained in first aid, and in the three schools where first aid kits were available, the kits were not checked and therefore not replenished. In one school there was no first aid kit at all; thus, there was little focus on precautionary measures in that school. There were therefore no strategies in place for the management of health programmes that support the curriculum. / MEd (Education Management), North-West University, Vaal Triangle Campus, 2014