431

Softwarová podpora podnikových procesů / Software Support of Enterprise Processes

Novák, Pavel January 2013 (has links)
This master's thesis focuses on business process management in a manufacturing and trading company using software products. The result of the thesis is a design of changes to the management of the current processes and the selection of an enterprise information system to support managerial decisions at the tactical and operational levels. An evaluation of the economic effects of the investment in the information system is part of the design.
432

Hodnocení finanční výkonnosti společnosti prostřednictvím benchmarkingu / Evaluation of the Company's Financial Performance Using Benchmarking Approach

Vaněček, Martin January 2016 (has links)
This master's thesis deals with benchmarking as part of a strategic approach to financial performance management. The goal of the thesis is to evaluate, through benchmarking, the financial performance of the company MESIT foundry, Inc. The theoretical part describes the basic assumptions of strategic control and company (business) performance, with the main focus on benchmarking and financial analysis. The practical part presents and evaluates the results of a benchmarking study comparing the financial performance of MESIT foundry, Inc. with four competitors of comparable size in the production of castings from light non-ferrous metals, covering the period from 2010 to 2014. The study results show that MESIT foundry, Inc. maintains a consistently very good level of relative financial performance on the basic indicators of financial analysis: two operating indicators and one profitability indicator. A negative finding is the decline in relative financial performance in 2014 across the profitability indicators, with the exception of return on sales. Recommendations for improving the financial performance of the analyzed company, based on these findings, are presented at the end of the thesis.
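The indicator comparison the study describes can be sketched in a few lines. The ratio definitions below are standard, but the figures, the peer set, and the function names are purely illustrative, not MESIT foundry's actual data or the thesis's methodology.

```python
def ratios(sales, net_income, assets, equity):
    """Basic profitability indicators commonly used in financial benchmarking."""
    return {
        "ROS": net_income / sales,   # return on sales
        "ROA": net_income / assets,  # return on assets
        "ROE": net_income / equity,  # return on equity
    }

def rank_against_peers(company, peers, indicator):
    """Rank a company against comparable peers on one indicator (1 = best)."""
    values = sorted([company[indicator]] + [p[indicator] for p in peers],
                    reverse=True)
    return values.index(company[indicator]) + 1

# Illustrative figures only -- not actual company data.
company = ratios(sales=100.0, net_income=8.0, assets=120.0, equity=60.0)
peers = [ratios(90.0, 4.5, 110.0, 50.0), ratios(150.0, 9.0, 200.0, 80.0)]
print(rank_against_peers(company, peers, "ROS"))  # -> 1 (best of the three)
```

Repeating the ranking per indicator and per year yields the kind of relative-performance picture the study reports.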
433

The Hare, the Tortoise and the Fox : Extending Anti-Fuzzing

Dewitz, Anton, Olofsson, William January 2022 (has links)
Background. The goal of our master's thesis is to reduce the effectiveness of fuzzers that use coverage accounting. The method we chose is based on how the coverage accounting in TortoiseFuzz rates code paths to find memory corruption bugs: it simply looks for functions that tend to cause vulnerabilities and considers more to be better. Our approach is to insert extra calls to these memory functions inside fake code paths generated by anti-fuzzing. Objectives. Our thesis surveys current anti-fuzzing techniques to determine which tool to extend with our counter to coverage accounting. We conduct an experiment in which we run several fuzzers on different benchmark programs to evaluate our tool. Methods. The foundation for the anti-fuzzing tool will be obtained by conducting a literature review, to evaluate current anti-fuzzing techniques and how coverage accounting prioritizes code paths. Afterward, an experiment will be conducted to evaluate the created tool. To evaluate fuzzers, the FuzzBench platform will be used: a homogeneous test environment that lets future research compare more easily against prior work using a standard platform. Benchmarks representative of real-world applications will be chosen from within this platform. Each benchmark will be executed in three versions: the original, one protected by a prior anti-fuzzing tool, and one protected by our new anti-fuzzing tool. Results. The experiment showed that our anti-fuzzing tool successfully lowered the number of unique bugs found by TortoiseFuzz, even when the benchmark is protected by a previously developed anti-fuzzing tool. Conclusions. We conclude, based on our results, that our tool shows promise against a fuzzer using coverage accounting. Further study will push fuzzers to become even better at overcoming new anti-fuzzing methods.
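The scoring idea the thesis exploits can be illustrated with a toy sketch: coverage accounting rates a path by how many vulnerability-prone memory functions it touches, so fake anti-fuzzing branches padded with such calls outrank genuine paths. The function list and traces below are assumptions for illustration, not TortoiseFuzz's actual implementation.

```python
# Functions that coverage accounting treats as vulnerability-prone
# (illustrative list, not TortoiseFuzz's real one).
MEMORY_FUNCTIONS = {"memcpy", "strcpy", "malloc", "free", "realloc"}

def coverage_accounting_score(trace):
    """Rate an execution path by the number of memory-function calls it
    contains -- more is considered more interesting."""
    return sum(1 for call in trace if call in MEMORY_FUNCTIONS)

# A genuine path versus a fake anti-fuzzing path padded with memory calls.
real_path = ["parse_header", "memcpy", "validate", "free"]
fake_path = ["decoy_branch", "memcpy", "memcpy", "malloc", "strcpy", "free"]

# The padded decoy outranks the real path, misdirecting the fuzzer.
assert coverage_accounting_score(fake_path) > coverage_accounting_score(real_path)
```

The counter-measure works precisely because the metric counts call sites rather than verifying they are reachable with attacker-controlled data.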
434

Detecting Memory-Boundedness with Hardware Performance Counters

Molka, Daniel, Schöne, Robert, Hackenberg, Daniel, Nagel, Wolfgang E. 23 April 2019 (has links)
Modern processors incorporate several performance monitoring units, which can be used to count events that occur within different components of the processor. They provide access to information on hardware resource usage and can therefore be used to detect performance bottlenecks. Thus, many performance measurement tools are able to record them alongside information about the application behavior. However, the exact meaning of the supported hardware events is often incomprehensible due to the system complexity and partially lacking or even inaccurate documentation. For most events it is also not documented whether a certain rate indicates saturated resource usage. Therefore, it is usually difficult to draw conclusions about the performance impact from the observed event rates. In this paper, we evaluate whether hardware performance counters can be used to measure the capacity utilization within the memory hierarchy and estimate the impact of memory accesses on the achieved performance. The presented approach is based on a small selection of micro-benchmarks that constantly stress individual components in the memory subsystem, ranging from caches to main memory. These workloads are used to identify hardware performance counters that provide good estimates for the utilization of individual components in the memory hierarchy. However, since access latencies can be interleaved with computing instructions, a high utilization of the memory hierarchy does not necessarily result in low performance. We therefore also investigate which stall counters provide good estimates for the number of cycles that are actually spent waiting for the memory hierarchy.
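The bottom-line estimates the paper is after reduce to ratios of counter readings: a stall-cycle share for memory-boundedness, and an achieved-versus-peak ratio for capacity utilization. A minimal sketch, where the event names and figures are placeholders rather than any specific processor's counters:

```python
def memory_bound_fraction(counters):
    """Estimate the share of cycles spent waiting on the memory hierarchy
    from raw event counts. The event names are placeholders; which stall
    events actually track memory waiting is processor-specific and is
    exactly what the paper's micro-benchmarks are used to identify."""
    return counters["stall_cycles_memory"] / counters["cycles"]

def utilization(achieved_bandwidth, peak_bandwidth):
    """Capacity utilization of one memory-hierarchy level, relative to the
    peak measured by a micro-benchmark that saturates that level."""
    return achieved_bandwidth / peak_bandwidth

sample = {"cycles": 1_000_000, "stall_cycles_memory": 420_000}
print(memory_bound_fraction(sample))  # 0.42
```

A high utilization with a low stall fraction is the case the paper warns about: memory traffic overlapped with computation, hence not a bottleneck.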
435

Fuzzer Test Log Analysis Using Machine Learning : Framework to analyze logs and provide feedback to guide the fuzzer

Yadav, Jyoti January 2018 (has links)
In the modern world, machine learning and deep learning have become popular choices for analyzing and identifying patterns in large volumes of data. The focus of this thesis work has been the design of alternative strategies that use machine learning to guide the fuzzer in selecting the most promising test cases. The work mainly focuses on the analysis of the data using machine learning techniques; a detailed analysis study is carried out in multiple phases. The first phase converts the data into a suitable format (pre-processing) so that the necessary features can be extracted and fed as input to unsupervised machine learning algorithms. Machine learning algorithms accept input data in the form of matrices that represent the dimensionality of the extracted features. Several experiments and runtime benchmarks have been conducted to choose the most efficient algorithm based on execution time and result accuracy; finally, the best choice has been implemented to get the desired result. The second phase of the work applies supervised learning over the clustering results. The final phase describes how an incremental learning model is built to score the test case logs and return their score in near real time, which can act as feedback to guide the fuzzer.
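The phases described above (pre-process and featurize logs, cluster them, then score new logs incrementally as fuzzer feedback) can be sketched end to end. The bag-of-tokens features, the distance measure, and the seed logs below are simplified assumptions, not the thesis's actual pipeline.

```python
from collections import Counter
import math

def featurize(log_line):
    """Pre-processing step: bag-of-tokens feature vector for one log line."""
    return Counter(log_line.lower().split())

def distance(a, b):
    """Euclidean distance between two sparse token-count vectors."""
    keys = set(a) | set(b)
    return math.sqrt(sum((a[k] - b[k]) ** 2 for k in keys))

class IncrementalScorer:
    """Score incoming logs by distance to running cluster centroids and
    fold each log into its nearest centroid (near-real-time feedback)."""
    def __init__(self, seed_logs):
        self.centroids = [featurize(s) for s in seed_logs]

    def score(self, log_line):
        v = featurize(log_line)
        dists = [distance(v, c) for c in self.centroids]
        nearest = min(range(len(dists)), key=dists.__getitem__)
        self.centroids[nearest].update(v)  # simplified incremental update
        # A larger distance means a more novel log, i.e. a more
        # promising test case for the fuzzer to pursue.
        return dists[nearest], nearest

scorer = IncrementalScorer(["crash segfault at 0x0", "timeout after 60s"])
score, cluster = scorer.score("crash segfault at 0xff")
```

A real implementation would replace the token counts with the engineered features the thesis extracts, but the cluster-then-score loop is the same shape.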
436

Evaluating Swift concurrency on the iOS platform : A performance analysis of the task-based concurrency model in Swift 5.5 / Utvärdering av Swift concurrency på iOS-plattformen : En prestandautvärdering av den task-baserade concurrency-modellen i Swift 5.5

Kärrby, Andreas January 2022 (has links)
Due to limitations in hardware, raising processor clock speeds is no longer the primary way to increase computing performance. Instead, computing devices are equipped with multiple processors (they are multi-core) to increase performance by enabling parallel execution of code. To fully utilize all available computational power, programs need to be concurrent, i.e. be able to manage multiple tasks at the same time. To this end, programming languages and platforms often provide a concurrency model that allows developers to construct concurrent programs. These models can vary both in design and implementation. In September of 2021, a new version of the Swift programming language, most commonly used to develop mobile applications on Apple's iOS platform, was released. This release introduced a new concurrency model, Swift concurrency (SC), featuring e.g. structured concurrency and the async/await pattern. The performance of a concurrency model is important, not least because end users expect applications to be responsive and performant. This thesis investigates Swift's new concurrency model from a performance perspective, comparing it to a previous model, Grand Central Dispatch (GCD). Six benchmark applications are developed and implemented in both the GCD and the Swift concurrency models. Three of the benchmarks are focused on exercising separate parts of the models in isolation. The other three use the models to solve classical computational problems: Fibonacci numbers, the N-Queens problem, and matrix multiplication. A performance analysis is carried out to study the differences in execution time and memory consumption between the two models. The results show differences between the two models, especially in execution time, and indicate that neither model consistently outperforms the other. Finally, some possible avenues for future work are identified.
437

Development of an acoustico-vestibular system for head rolling stabilization : A perception system to stabilize a bio-inspired swimming snake robot / Utveckling av ett akustiskt-vestibulärt system för stabilisering av rullning av huvudet : Ett perceptionssystem för att stabilisera en bioinspirerad simmande ormrobot

Paul, Marie-Jeanne January 2023 (has links)
Bio-inspired robots are becoming more and more common, and taking inspiration from solutions found in nature is often useful for tackling problems. In the field of swimming robots, we can draw inspiration from swimming snakes. In the last twenty years, several snake-like swimming robots have been designed, as their slender shape allows more freedom of movement and may be less intrusive in an underwater environment. One problem encountered with swimming snake robots is stabilizing them on the water surface and controlling the head movement: due to its shape, such a robot is very sensitive to rolling motion. A way to counter this is inspired by the swimming of the cottonmouth snake; a project at the IMT Atlantique robotics lab works with the body shape by moving the robot's modules. By rotating its modules, the robot can exploit buoyancy and gravity to stay stabilized, which would also allow the robot's gaze to be stabilized. For that, however, we need to be able to locate the head relative to the water surface in order to obtain an absolute position for each module. The goal of this thesis is to design a perception module that reproduces the senses a cottonmouth snake uses to stabilize itself on the water surface. To achieve this, an Inertial Measurement Unit is used to reproduce the information a snake gets from its inner ear, and distance sensors imitate the snake's vision, providing an estimate of the distance of the head from the water surface. The perception module should be able to follow the movement of the snake robot's head relative to the water surface; one of the challenges is that this module must be operational specifically on water. To test the performance of the perception module, a hardware solution is presented in this thesis to simulate the movements of the robot's head above a water tank. This structure allows the perception module to perform simple movements such as sinusoidal waves along the z axis, or the movements the robot head would make when slightly disturbed, according to the simulation of that robot. The results show that the perception module has an accuracy similar to biological systems (the senses of the snake), so we believe it will be usable to control the robot's stability.
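A common way to fuse a drifting rate sensor (the gyroscope, playing the inner-ear role) with a noisy absolute reference (here, an angle recoverable from the distance sensors) is a complementary filter. The sketch below illustrates that idea only; the gain, sample rate, and data are assumptions, not the module's actual parameters.

```python
def complementary_filter(gyro_rates, accel_angles, dt=0.01, alpha=0.98):
    """Fuse gyroscope roll rate (rad/s) with an absolute roll reference
    (rad) into a roll estimate: integrate the gyro (fast but drifting)
    and continuously correct it with the absolute angle (slow, noisy,
    but drift-free). Sketch only -- gains are illustrative."""
    angle = 0.0  # unknown initial roll
    estimates = []
    for rate, ref_angle in zip(gyro_rates, accel_angles):
        angle = alpha * (angle + rate * dt) + (1 - alpha) * ref_angle
        estimates.append(angle)
    return estimates

# Constant true roll of 0.1 rad, stationary gyro: the estimate converges
# toward the reference despite starting from the wrong initial angle.
est = complementary_filter(gyro_rates=[0.0] * 400, accel_angles=[0.1] * 400)
```

In the thesis's setting the absolute reference would come from the distance sensors' view of the water surface rather than from an accelerometer.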
438

Knowledge Extraction for Hybrid Question Answering

Usbeck, Ricardo 22 May 2017 (has links) (PDF)
Since the proposal of hypertext by Tim Berners-Lee to his employer CERN on March 12, 1989, the World Wide Web has grown to more than one billion Web pages and still grows. With the later proposed Semantic Web vision, Berners-Lee et al. suggested an extension of the existing (Document) Web to allow better reuse, sharing and understanding of data. Both the Document Web and the Web of Data (which is the current implementation of the Semantic Web) grow continuously. This is a mixed blessing, as the two forms of the Web grow concurrently and most commonly contain different pieces of information. Modern information systems must thus bridge a Semantic Gap to allow holistic and unified access to information, independent of how the data is represented. One way to bridge the gap between the two forms of the Web is the extraction of structured data, i.e., RDF, from the growing amount of unstructured and semi-structured information (e.g., tables and XML) on the Document Web. Note that unstructured data stands for any type of textual information, such as news, blogs or tweets. While extracting structured data from unstructured data allows the development of powerful information systems, it requires high-quality and scalable knowledge extraction frameworks to lead to useful results. The dire need for such approaches has led to the development of a multitude of annotation frameworks and tools. However, most of these approaches are not evaluated on the same datasets or using the same measures. The resulting Evaluation Gap needs to be tackled by a concise evaluation framework to foster fine-grained and uniform evaluations of annotation tools and frameworks over any knowledge base. Moreover, with the constant growth of data and the ongoing decentralization of knowledge, intuitive ways for non-experts to access the generated data are required.
Humans have adapted their search behavior to current Web data through access paradigms such as keyword search, so as to retrieve high-quality results; hence, most Web users only expect Web documents in return. However, humans think and most commonly express their information needs in natural language rather than in keyword phrases. Answering complex information needs often requires combining knowledge from various, differently structured data sources. Thus, we observe an Information Gap between natural-language questions and current keyword-based search paradigms, which in addition do not make use of the available structured and unstructured data sources. Question Answering (QA) systems provide an easy and efficient way to bridge this gap by allowing data to be queried via natural language, thus reducing (1) a possible loss of precision and (2) a potential loss of time while reformulating the search intention into a machine-readable form. Furthermore, QA systems enable answering natural language queries with concise results instead of links to verbose Web documents. Additionally, they allow as well as encourage the access to, and the combination of, knowledge from heterogeneous knowledge bases (KBs) within one answer. Consequently, three main research gaps are considered and addressed in this work: First, addressing the Semantic Gap between the unstructured Document Web and the structured Web of Data requires the development of scalable and accurate approaches for the extraction of structured data in RDF. This research challenge is addressed by several approaches within this thesis. This thesis presents CETUS, an approach for recognizing entity types to populate RDF KBs. Furthermore, our knowledge-base-agnostic disambiguation framework AGDISTIS can efficiently detect the correct URIs for a given set of named entities.
Additionally, we introduce REX, a Web-scale framework for RDF extraction from semi-structured (i.e., templated) websites, which makes use of the semantics of the reference knowledge base to check the extracted data. The ongoing research on closing the Semantic Gap has already yielded a large number of annotation tools and frameworks. However, these approaches are currently still hard to compare, since the published evaluation results are calculated on diverse datasets and evaluated based on different measures. On the other hand, the issue of comparability of results is not to be regarded as intrinsic to the annotation task. Indeed, it is now well established that scientists spend between 60% and 80% of their time preparing data for experiments. Data preparation being such a tedious problem in the annotation domain is mostly due to the different formats of the gold standards as well as the different data representations across reference datasets. We tackle the resulting Evaluation Gap in two ways: First, we introduce a collection of three novel datasets, dubbed N3, to leverage the possibility of optimizing NER and NED algorithms via Linked Data and to ensure maximal interoperability, overcoming the need for corpus-specific parsers. Second, we present GERBIL, an evaluation framework for semantic entity annotation. The rationale behind our framework is to provide developers, end users and researchers with easy-to-use interfaces that allow for the agile, fine-grained and uniform evaluation of annotation tools and frameworks on multiple datasets. The decentralized architecture behind the Web has led to pieces of information being distributed across data sources with varying structure. Moreover, the increasing demand for natural-language interfaces, as evidenced by current mobile applications, requires systems to deeply understand the underlying user information need.
In conclusion, the natural language interface for asking questions requires a hybrid approach to data usage, i.e., simultaneously performing a search on full-texts and semantic knowledge bases. To close the Information Gap, this thesis presents HAWK, a novel entity search approach developed for hybrid QA based on combining structured RDF and unstructured full-text data sources.
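The hybrid idea, querying a structured knowledge base and unstructured full text at the same time and combining the results, can be illustrated with a toy example. The triples, document, and function names below are purely illustrative and are not HAWK's actual interface or data.

```python
# Toy RDF-like knowledge base and a tiny full-text corpus
# (illustrative data only, not from the thesis).
TRIPLES = [
    ("Leipzig", "locatedIn", "Germany"),
    ("Leipzig", "population", "600000"),
]
DOCUMENTS = {
    "doc1": "Leipzig is a city in Saxony known for its trade fairs.",
}

def kb_lookup(subject, predicate):
    """Structured half of a hybrid query: exact triple matching."""
    return [o for s, p, o in TRIPLES if s == subject and p == predicate]

def fulltext_search(keyword):
    """Unstructured half: naive keyword retrieval over documents."""
    return [doc_id for doc_id, text in DOCUMENTS.items()
            if keyword.lower() in text.lower()]

def hybrid_answer(subject, predicate, keyword):
    """Combine both sources into one answer, as hybrid QA requires:
    a concise fact plus supporting full-text evidence."""
    return {"facts": kb_lookup(subject, predicate),
            "evidence": fulltext_search(keyword)}

print(hybrid_answer("Leipzig", "population", "Leipzig"))
```

A real hybrid QA system would of course parse the natural-language question into the structured and keyword parts automatically; the sketch only shows why both halves are needed in one answer.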
439

Analyse de sécurité et QoS dans les réseaux à contraintes temporelles / Analysis of security and QoS in networks with time constraints

Mostafa, Mahmoud 10 November 2011 (has links)
QoS and security are two key objectives for network systems to attain, especially for critical networks with temporal constraints. Unfortunately, they often conflict: while QoS tries to minimize processing delay, strong security protection requires more processing time and causes traffic delay and QoS degradation. Moreover, real-time systems, QoS and security have often been studied separately and by different communities. In the context of the avionic data network, various domains and heterogeneous applications with different levels of criticality cooperate in the mutual exchange of information, often through gateways. It is clear that this information has different levels of sensitivity in terms of security and QoS constraints. Given this context, the major goal of this thesis is to increase the robustness of the next-generation e-enabled avionic data network with respect to security threats and ruptures in traffic characteristics. From this perspective, we surveyed the literature to establish the state of the art in network security, QoS and applications with time constraints. Then, we studied the next-generation e-enabled avionic data network. This allowed us to draw a map of the field and to understand security threats. Based on this study, we identified both the security and the QoS requirements of the next-generation e-enabled avionic data network. In order to satisfy these requirements, we proposed the architecture of a QoS-capable integrated security gateway to protect the next-generation e-enabled avionic data network and ensure the availability of critical traffic. To provide true integration between the different gateway components, we built an integrated session table to store all the needed session information and to speed up packet processing (firewall stateful inspection, NAT mapping, QoS classification and routing).
This required studying the existing session table structure and proposing a new structure that fulfills our objective. We also present the processing algorithms needed to access the new integrated session table. In the IPSec VPN component we identified the problem that IPSec ESP-encrypted traffic cannot be classified appropriately by QoS edge routers. To overcome this problem, we developed the Q-ESP protocol, which allows the classification of encrypted traffic and combines the security services provided by IPSec ESP and AH. A variety of bandwidth management techniques have been developed to manage network traffic wisely. To assess their performance and identify which bandwidth management technique is the most suitable in our context, we performed a delay-based comparison using experimental tests. In the final stage, we benchmarked our implemented security gateway against three commercially available software gateways. The goal of this benchmark was to evaluate performance and identify problems for future research work. This dissertation is divided into two parts, written in French and in English respectively. Both parts follow the same structure; the first is an extended summary of the second.
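The idea behind the integrated session table can be sketched roughly as follows. This is a minimal Python sketch, not the thesis's actual structure; the field names (`fw_state`, `nat_mapping`, `qos_class`, `next_hop`) and the 5-tuple key are illustrative assumptions:

```python
from dataclasses import dataclass
from typing import Dict, Optional, Tuple

@dataclass
class SessionEntry:
    fw_state: str                 # stateful-inspection state, e.g. "ESTABLISHED"
    nat_mapping: Tuple[str, int]  # (translated ip, translated port)
    qos_class: str                # traffic class used for QoS scheduling
    next_hop: str                 # cached routing decision for the session

# Keyed by the classic 5-tuple: (src ip, src port, dst ip, dst port, protocol).
session_table: Dict[tuple, SessionEntry] = {}

def process_packet(five_tuple: tuple) -> Optional[SessionEntry]:
    """One hash lookup yields the state for all components at once,
    replacing separate firewall, NAT, QoS and routing lookups."""
    return session_table.get(five_tuple)

# Entries are installed once, when the session is established.
key = ("10.0.0.5", 40000, "192.168.1.9", 443, "TCP")
session_table[key] = SessionEntry(
    fw_state="ESTABLISHED",
    nat_mapping=("172.16.0.1", 61000),
    qos_class="EF",
    next_hop="eth1",
)
```

The point of the design is that a packet's 5-tuple is hashed once, and every gateway component reads its state from the single returned entry instead of consulting its own table.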
440

The IKEA Industry way of ergonomic risk assessment: Development of a global standard for ergonomic risk assessment

Sroka, Angelica January 2019 (has links)
In 2018 IKEA Industry, the largest producer of wood-based furniture for IKEA customers, presented its sustainability strategy for Financial Year 2025. In Health & Safety, it wants to minimize ergonomic risks at its factories. To understand what risks the factories contain, the first step is a common ergonomic risk assessment methodology. Because of a lack of ergonomics knowledge at IKEA Industry, this responsibility was given to this master's thesis project. Using interviews, surveys and observations, the project identified the factories' needs in ergonomic risk assessment. A literature review also uncovered requirements the factories should have but had not asked for. Through benchmarking, several common methods used on the market were summarized and analyzed against the requirements. Three methods, KIM, RAMP and HARM, were chosen to be tested by the factories. A user test made it clear that KIM was the easiest to use. HARM was eliminated because it lacks an evaluation of lifting and pushing movements. To choose between KIM and RAMP, the two were evaluated against the requirements. The results showed that KIM was the best method for the IKEA Industry factories. In some places RAMP had better assessment methods; in order not to lose these, they were incorporated into KIM to make it suit the factories even better. The result is a document called the Global standard of ergonomic risk assessment. The method is divided into three sub-methods depending on whether the task involves lifting/carrying work, pushing work or repetitive work. The results are then summarized in a chart that shows what needs to be investigated. With the help of the literature and the analysis of the factories, the project also decided which roles will participate in the assessment: the suggestion is a manager, an ergonomist and a production co-worker. With this method, the factories will be able to understand what ergonomic risks they have. 
They will only need to evaluate the work tasks with this method and will then be presented with all high, medium and low ergonomic risks in the factories, to minimize these before FY2025. / In 2018 IKEA Industry, the world's largest furniture manufacturer, presented its sustainability strategy for Financial Year 2025. Within Health & Safety it wants to reduce the ergonomic risks at its factories. To understand which risks exist, it decided to create an assessment method shared across the factories. Since the company lacks knowledge in ergonomics, it chose to hand this responsibility to this master's thesis project. Using interviews, surveys and observations, the project identified the factories' needs in ergonomic risk assessment. The literature also revealed further needs that exist but that the factories had not recognized themselves. To find methods, a benchmarking exercise summarized several of the most common and recognized ergonomic risk assessment methods. These methods were then analyzed against the needs, and KIM, RAMP and HARM were selected. They were then tested by the factories in a user test. The result showed that KIM was the easiest to use. HARM was also eliminated because of its shortcomings in assessing lifting and pulling. To choose the method that suits IKEA Industry best, KIM and RAMP were assessed against the requirements that had been set; here KIM proved the most suitable method for IKEA Industry. Since KIM occasionally had shortcomings in its assessment where RAMP was better, certain parts of RAMP were added to complement KIM. The result is a document called the "Global standard of ergonomic risk assessment". The method is divided into three sub-methods depending on whether lifting work, pushing/pulling work or repetitive work is to be assessed. The results are then summarized in a table that shows which areas need to be investigated. 
The project has also, with the help of theory and an analysis of the factories, determined which roles should take part in an assessment. The final suggestion is the responsible manager, an ergonomist and a production worker. With this method the factories should be able to understand which risks they have. They will only need to use the document, evaluate, and then receive information about all high, medium and low risks, in order to minimize these before FY2025.
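The KIM family of methods scores a task by multiplying a time/duration rating with the summed load-indicator ratings and mapping the product onto risk ranges. Below is a minimal sketch of that scoring scheme, assuming the published KIM thresholds (<20 low, 20–<50 moderate, 50–<100 substantially increased, ≥100 high); the ratings and the example task are illustrative and not taken from the IKEA Industry standard document:

```python
def kim_risk_score(time_rating: float, indicator_ratings: list) -> float:
    """KIM-style score: time/duration rating multiplied by the
    sum of the load-indicator ratings (load, posture, conditions, ...)."""
    return time_rating * sum(indicator_ratings)

def risk_level(score: float) -> str:
    """Map a score onto the published KIM risk ranges (assumed here)."""
    if score < 20:
        return "low"
    if score < 50:
        return "moderate"
    if score < 100:
        return "substantially increased"
    return "high"

# Example: a lifting task with duration rating 4 and
# load + posture + working-conditions ratings of 4, 2 and 1.
score = kim_risk_score(4, [4, 2, 1])   # 4 * 7 = 28 -> "moderate"
```

A summary chart like the one the standard produces can then be built simply by running every assessed work task through `risk_level` and grouping the results.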
