31 |
Systém pro podporu managementu softwarových aktiv / Software Asset Management Support System. Bielik, Branislav. January 2011.
This work deals with software asset management, the types of managed software licenses, and the standards related to software asset management processes. It specifies the requirements for a SAM (Software Asset Management) support system and presents the system's design. It then describes the implementation of the system in the chosen environment, followed by testing of the system and an evaluation of the results.
32 |
Practical Exploit Mitigation Design Against Code Re-Use and System Call Abuse Attacks. Jelesnianski, Christopher Stanislaw. 09 January 2023.
Over the years, many defense techniques have been proposed by the security community. Even so, few have been adopted by the general public and deployed in production. This limited defense deployment and weak security have serious consequences, as large-scale cyber-attacks are now a common occurrence in society. One major obstacle that stands in the way is practicality: the quality of being designed for actual use, of having usefulness or convenience. For example, an exploit mitigation design may be considered impractical to deploy if it imposes high performance overhead, despite offering excellent and robust security guarantees. This is because achieving the hallmarks of practical design, such as minimizing adverse side-effects like performance degradation or memory monopolization, is difficult in practice, especially when trying to provide a high level of security for users.
Secure and practical exploit mitigation design must successfully navigate several challenges. To illustrate, modern-day attacks, especially code re-use attacks, assume that rudimentary defenses such as Data Execution Prevention (DEP) and Address Space Layout Randomization (ASLR) will be deployed moving forward. These attacks have therefore evolved and diversified their angles of attack, becoming capable of leveraging a multitude of different code components. Accordingly, the security community has uncovered these threats and continued to propose possible resolutions in the form of new exploit mitigation designs. More specifically, defenses have had to extend their capabilities to protect more aspects of code, making defense techniques increasingly complex. Trouble then arises because supporting such fine-grained defenses brings inherent disadvantages, such as consuming significant hardware resources that could otherwise be used for useful work. This complexity has made performance, security, and scalability competing ideals in practical system design. At the same time, other recent efforts have implemented mechanisms with negligible performance impact, but do so at the risk of weaker security guarantees.
This dissertation first formalizes the challenges in modern exploit mitigation design. To illustrate these challenges, it presents a survey from the perspective of both attacker and defender to provide an overview of the current security landscape. This includes defining an informal taxonomy of exploit mitigation strategies, explaining prominent attack vectors faced by security experts today, and identifying and defining the code components that are generally abused by code re-use attacks. This dissertation then presents two practical design solutions. Both defense system designs uphold the goals of achieving realistic performance, providing strong security guarantees, being robust for modern application code-bases, and being able to scale across the system at large.
The first practical exploit mitigation design this dissertation presents is MARDU. MARDU is a novel re-randomization approach that utilizes on-demand randomization and the concept of code trampolines to support sharing of code transparently system-wide. To the best of my knowledge, MARDU is the first presented re-randomization technique capable of runtime code sharing for re-randomized code system-wide. Moreover, MARDU is one of the very few re-randomization mechanisms capable of performing seamless live thread migration to newly randomized code without pausing application execution. This dissertation describes the full design, implementation, and evaluation of MARDU to demonstrate its merits and show that careful design can uphold all practical design goals. For instance, scalability is a major challenge for randomization strategies, especially because traditional OS design expects code to be placed in known locations so that it can be reached by multiple processes, while randomization is purposefully trying to achieve the opposite, being completely unpredictable. This clash in expectations between system and defense design breaks a few very important assumptions for an application's runtime environment. This forces most randomization mechanisms to abandon the hope of upholding memory deduplication. MARDU resolves this challenge by applying trampolines to securely reach functions protected under secure memory. Even with this new calling convention in place, MARDU shows re-randomization degradation can be significantly reduced without sacrificing randomization entropy. Moreover, MARDU shows it is capable of defeating prominent code re-use variants with this practical design.
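To make the trampoline idea concrete, here is a minimal, hypothetical C sketch (the names and table layout are invented for illustration; in MARDU itself the trampolines reside in secure memory and target freshly randomized native code, as described above). Call sites never hold a direct code address, only a slot in an indirection table, so a re-randomizer can re-point every slot when code moves without pausing or patching the callers.

```c
/* Hypothetical sketch of trampoline-based indirection (names invented).
 * Callers never hold a direct code address, only a slot in a trampoline
 * table; a re-randomizer can re-point every slot when code moves without
 * pausing or patching the callers. */
#include <stdio.h>

typedef int (*fn_t)(int);

static int work_original(int x)   { return x + 1; }  /* code at its first location   */
static int work_randomized(int x) { return x + 1; }  /* the "same" code after a move */

/* one trampoline slot per protected function */
static fn_t trampoline[1] = { work_original };

/* what every compiled call site is rewritten into: an indirect call via the table */
static int call_work(int x) { return trampoline[0](x); }

/* what the re-randomizer conceptually does at a randomization epoch */
static void rerandomize(void) { trampoline[0] = work_randomized; }

int main(void)
{
    printf("%d\n", call_work(41));  /* dispatched to work_original     */
    rerandomize();                  /* code "moves"; callers untouched */
    printf("%d\n", call_work(41));  /* dispatched to work_randomized   */
    return 0;
}
```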
This dissertation then presents its second practical exploit mitigation solution, BASTION. BASTION is a fine-grained system call filtering mechanism aimed at significantly strengthening the security surrounding system calls. Like MARDU, BASTION upholds the principles of this dissertation and was implemented with practicality in mind. BASTION's design is based on empirical observation of what a legitimate system call invocation consists of. BASTION introduces System Call Integrity to enforce the correct and intended use of system calls within a program. In order to enforce this novel security policy, BASTION proposes three new specialized contexts for the effective enforcement of legitimate system call usage. Namely, these contexts enforce that system calls are only invoked with the correct calling convention, that system calls are reached through legitimate control-flow paths, and that all system call arguments are free from attacker corruption. By enforcing System Call Integrity with these contexts, this dissertation adds further evidence that context-sensitive defense strategies are superior to context-insensitive ones. BASTION prevents over 32 real-world and synthesized exploits in its security evaluation and incurs negligible performance overhead (0.60%-2.01%). BASTION demonstrates that narrow and specialized exploit mitigation designs can be effective on more than one front: BASTION not only prevents code re-use, but is capable of defending against any attack class that requires the use of system calls. / Doctor of Philosophy / Limited deployment of security defenses and weak security have serious consequences, as large-scale cyber-attacks are now a common occurrence. This may be surprising, since many defense techniques have been proposed; yet in reality, few have been adopted by the general public. To elaborate, designing an ideal defense that is strong security-wise but uses no computer resources is challenging. In practice, there is no free lunch, and a design must therefore consider how best to balance security with performance in order to be practical for users to deploy. Common tradeoffs include adverse side-effects such as slowing down user applications or imposing significant memory usage. Practical and strong defense design is therefore important to promote integration into the next generation of computer hardware and software. By sustaining practical design, the jump needed between a proof-of-concept and an implementation on commodity computer chips is substantially smaller. A practical defense should foremost guarantee strong levels of security and should not slow down a user's applications. Ideally, a practical defense is implemented so seamlessly that users do not even notice it. However, balancing practicality with strong security is hard to achieve in practice.
This dissertation first reviews the current security landscape; specifically, two important attack strategies are examined. First, code re-use attacks are exactly what they sound like: they reuse various bits and pieces of a program's code to create an attack. Second, system call abuse: system calls are essential functions that ordinarily allow a user program to talk to a computer's operating system; they enable operations such as a program asking for more memory or reading and writing files. When system calls are maliciously abused, they can cause a computer to use up all of its free memory or even launch an attacker-written program. This dissertation goes over how these attacks work and correspondingly explains popular defense strategies that have been proposed by the security community so far.
This dissertation then presents two defense system solutions that demonstrate how a practical defense system can be built. To that end, the full design, implementation, and evaluation of each defense system, named MARDU and BASTION, is presented. This dissertation leverages attack insights as well as compiler techniques to achieve its goal. A compiler is an essential developer tool that converts human-written code into a computer program; compilers can also apply additional optimizations and security-hardening techniques to make a program more secure. This dissertation's first defense solution, MARDU, is a runtime randomization defense. MARDU protects programs by randomizing the location of code chunks throughout execution so that attackers cannot find the code pieces they need to create an attack. Notably, MARDU is the first randomization defense that can be seamlessly deployed system-wide and is backwards compatible with programs not outfitted with MARDU. This dissertation's second defense solution, BASTION, is a defense system that focuses strictly on protecting the system calls in a program. As mentioned earlier, system calls are security-critical functions that allow a program to talk to a computer's operating system. BASTION protects the entire computer by ensuring that every time a system call is invoked by a user program, it was rightfully requested by the program and not maliciously by an attacker. BASTION verifies that the request is legitimate by confirming that the current program state meets a certain set of criteria.
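For readers unfamiliar with system call filtering, the sketch below shows the coarsest widely deployed form of it on Linux: a seccomp-BPF filter that blocks a single system call by number. This is not BASTION's mechanism. BASTION is compiler-assisted and context-sensitive, additionally checking the calling convention, the control-flow path, and argument integrity at each invocation, whereas a filter like this one only looks at the raw system call number.

```c
/* Hypothetical sketch (Linux-only): a classic seccomp-BPF filter that kills the
 * process if it ever invokes execve, and allows every other system call.
 * Build: cc seccomp_sketch.c -o seccomp_sketch */
#include <stdio.h>
#include <stddef.h>
#include <unistd.h>
#include <sys/prctl.h>
#include <sys/syscall.h>
#include <linux/seccomp.h>
#include <linux/filter.h>

int main(void)
{
    struct sock_filter filter[] = {
        /* load the system call number from struct seccomp_data */
        BPF_STMT(BPF_LD | BPF_W | BPF_ABS, offsetof(struct seccomp_data, nr)),
        /* if it is execve, fall through to KILL; otherwise jump to ALLOW */
        BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_execve, 0, 1),
        BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_KILL),
        BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
    };
    struct sock_fprog prog = {
        .len    = sizeof(filter) / sizeof(filter[0]),
        .filter = filter,
    };

    /* required so an unprivileged process may install a filter */
    if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0) == -1) { perror("no_new_privs"); return 1; }
    if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog) == -1) { perror("seccomp"); return 1; }

    puts("execve is now blocked for this process");
    execlp("echo", "echo", "never printed", (char *)NULL);
    /* not reached: the filter terminates the process at the execve syscall */
    return 0;
}
```

A production filter would also validate the architecture field of seccomp_data and typically whitelist rather than blacklist; the example keeps only the minimum needed to show the interface.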
33 |
Dynamic Modelling and Optimization of Polymerization Processes in Batch and Semi-batch Reactors: Dynamic Modelling and Optimization of Bulk Polymerization of Styrene, Solution Polymerization of MMA and Emulsion Copolymerization of Styrene and MMA in Batch and Semi-batch Reactors using Control Vector Parameterization Techniques. Ibrahim, W.H.B.W. January 2011.
Dynamic modelling and optimization of three different processes, namely (a) bulk polymerization of styrene, (b) solution polymerization of methyl methacrylate (MMA), and (c) emulsion copolymerization of styrene and MMA in batch and semi-batch reactors, are the focus of this work. The models are presented as sets of differential-algebraic equations describing each process. Different optimization problems are formulated, such as (a) maximum conversion (Xn), (b) maximum number-average molecular weight (Mn), and (c) minimum time to achieve the desired polymer molecular properties (defined as pre-specified values of monomer conversion and number-average molecular weight). Reactor temperature, jacket temperature, initial initiator concentration, monomer feed rate, initiator feed rate, and surfactant feed rate are used as decision variables in the optimization formulations. The dynamic optimization problems were converted into nonlinear programming problems using CVP (control vector parameterization) techniques and solved with the efficient SQP (Successive Quadratic Programming) method available within the gPROMS (general PROcess Modelling System) software.
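As a purely illustrative sketch of the CVP idea (not the models or solver used in this work), the following C program parameterizes a reactor temperature profile as a few piecewise-constant segments, integrates a toy first-order conversion model with assumed Arrhenius propagation and transfer constants, and improves the profile with a crude coordinate search standing in for gPROMS's SQP solver. All rate constants, bounds, and the Mn surrogate are invented; only the qualitative trade-off reported in this work is reproduced (higher temperature speeds conversion but lowers Mn).

```c
/* Toy control-vector parameterization (all kinetics invented for illustration).
 * The reactor temperature profile is parameterized as NSEG piecewise-constant
 * segments; a crude coordinate search stands in for the SQP solver.
 * Build: cc toy_cvp.c -lm */
#include <stdio.h>
#include <math.h>

#define NSEG     4          /* control intervals                   */
#define STEPS    4000       /* Euler steps over the batch          */
#define TFINAL   7200.0     /* batch time, s                       */
#define XTARGET  0.90       /* required final conversion           */
#define MW_MON   100.0      /* monomer molar mass surrogate, g/mol */

static double kp (double T) { return 1.2e3 * exp(-5000.0 / T); }  /* propagation (assumed) */
static double ktr(double T) { return 4.0e6 * exp(-8000.0 / T); }  /* transfer    (assumed) */

/* integrate the toy model for one candidate profile; return the objective */
static double simulate(const double T[NSEG], double *Xout)
{
    double dt = TFINAL / STEPS, X = 0.0, Mn = 0.0;
    for (int i = 0; i < STEPS; i++) {
        double Tk      = T[i * NSEG / STEPS];
        double dX      = kp(Tk) * (1.0 - X) * dt;        /* conversion increment */
        double mn_inst = MW_MON * kp(Tk) / ktr(Tk);      /* instantaneous Mn     */
        if (X + dX > 1e-12)
            Mn = (Mn * X + mn_inst * dX) / (X + dX);     /* cumulative average   */
        X += dX;
    }
    *Xout = X;
    /* penalize profiles that miss the conversion target */
    return Mn - 1e5 * fmax(0.0, XTARGET - X);
}

int main(void)
{
    double T[NSEG] = { 340.0, 340.0, 340.0, 340.0 }, X;
    double best = simulate(T, &X);

    for (int pass = 0; pass < 50; pass++)            /* coordinate search */
        for (int s = 0; s < NSEG; s++)
            for (int di = 0; di < 2; di++) {
                double d = di ? 2.0 : -2.0, trial[NSEG];
                for (int k = 0; k < NSEG; k++) trial[k] = T[k];
                trial[s] += d;
                if (trial[s] < 320.0 || trial[s] > 380.0) continue;
                double x, obj = simulate(trial, &x);
                if (obj > best) { best = obj; T[s] = trial[s]; }
            }

    simulate(T, &X);
    printf("profile:");
    for (int s = 0; s < NSEG; s++) printf(" %.1f K", T[s]);
    printf("\nconversion %.3f, objective (Mn surrogate) %.0f\n", X, best);
    return 0;
}
```

In the real formulations the decision vector also includes feed rates and the initial initiator concentration, and the model is a full DAE system rather than a single ODE.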
The process model used for bulk polymerization of styrene in batch reactors, using 2,2'-azobisisobutyronitrile (AIBN) as the initiator, was improved by including the gel and glass effects. Compared with a previous study by other researchers that disregarded the gel and glass effects, the results of this work show that the batch operation time is significantly reduced while the required initial initiator concentration increases. Also, the termination rate constant decreases as the concentration of the mixture increases, resulting in rapid monomer conversion.
The process model used for solution polymerization of methyl methacrylate (MMA) in batch reactors, using AIBN as the initiator and toluene as the solvent, was improved by including free-volume theory to calculate the initiator efficiency, f. The effect of varying f was examined and compared with previous work that used a constant value of f = 0.53. The results of these studies show that the initiator efficiency f is not constant but decreases as monomer conversion increases over the course of the process.
The determination of optimal control trajectories for emulsion copolymerization of styrene and MMA, with the objective of maximizing the number-average molecular weight (Mn) and the overall conversion (Xn), was carried out in batch and semi-batch reactors. The initiator used in this work is potassium persulfate (K2S2O8) and the surfactant is sodium dodecyl sulfate (SDS). Reducing the pre-batch time increases Mn but decreases the conversion (Xn); the sooner monomer is added into the reactor, the earlier the polymer chains begin to grow, leading to higher Mn. Mn can also be increased by decreasing the initial initiator concentration (Ci0): fewer oligomeric radicals are produced at low Ci0, leading to fewer polymerization loci and thus a lower overall conversion. On the other hand, increasing the reaction temperature (Tr) decreases Mn, since the transfer coefficient increases at higher Tr, producing more monomeric radicals and resulting in more termination reactions.
34 |
Extração de casos de teste utilizando Redes de Petri hierárquicas e validação de resultados utilizando OWL / Test case extraction using hierarchical Petri Nets and results validation using OWL. Baumgartner Neto, August. 27 April 2015.
This work proposes two test methods for software system testing: the first extracts test cases from a model developed in hierarchical Petri nets, and the second validates the results after test execution using a domain model in OWL-S. Both processes increase the quality of the developed system by reducing the risk of insufficient coverage or an incomplete test of a functionality. The first technique consists of five steps: i) evaluation of the system and identification of separable sub-modules and entities; ii) identification of states and transitions; iii) modeling of the system (bottom-up); iv) validation of the created model by evaluating the workflow of each functionality; and v) extraction of test cases using one of the three test coverage criteria presented.
The second method must be applied after the tests have been executed and also has five steps: i) first, a system model is built in OWL (Web Ontology Language) containing all significant information about the application's business rules, identifying the classes, properties, and axioms that govern it; ii) then, the initial status before test execution is represented in the model by inserting the instances (individuals) present; iii) after the test cases are executed, the model is updated by inserting (without deleting already existing instances) new instances that represent the state of the application after the tests; iv) the next step uses a reasoner to make inferences over the OWL model, checking whether the model remains consistent, that is, whether there are no errors in the application; v) finally, the instances of the initial status are compared with those of the final status to verify whether elements were changed, created, or deleted correctly. The proposed process is intended mainly for black-box functional testing, but can easily be adapted for white-box testing. The test cases obtained were similar to those that would be obtained by manual analysis, while keeping the same system coverage. The validation proved consistent with the expected results, and the ontological model showed itself to be easy and intuitive to maintain.
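As a minimal illustration of the first method's extraction step (using a flat place/transition net rather than the hierarchical nets and coverage criteria of this work, and with an invented "login" example), the C sketch below enumerates firing sequences of a small net by walking its reachability graph; each maximal sequence corresponds to one candidate test case. The OWL-based validation step is not shown.

```c
/* Minimal sketch: test sequences from a flat place/transition net by walking
 * its reachability graph.  The tiny "login" net and names are invented; the
 * actual method uses hierarchical nets and explicit coverage criteria. */
#include <stdio.h>

#define NP 4                    /* places      */
#define NT 4                    /* transitions */
#define MAXDEPTH 4

static const char *tname[NT] = { "open_form", "submit_ok", "submit_bad", "logout" };

/* places: 0 idle, 1 form_open, 2 logged_in, 3 error_shown
 * pre[t][p]  = tokens consumed from place p when transition t fires
 * post[t][p] = tokens produced in place p when transition t fires */
static const int pre [NT][NP] = { {1,0,0,0}, {0,1,0,0}, {0,1,0,0}, {0,0,1,0} };
static const int post[NT][NP] = { {0,1,0,0}, {0,0,1,0}, {0,0,0,1}, {1,0,0,0} };

static int enabled(const int *m, int t)
{
    for (int p = 0; p < NP; p++) if (m[p] < pre[t][p]) return 0;
    return 1;
}

static void fire(const int *m, int t, int *out)
{
    for (int p = 0; p < NP; p++) out[p] = m[p] - pre[t][p] + post[t][p];
}

/* depth-first walk: every maximal firing sequence (bounded by MAXDEPTH)
 * becomes one candidate test case, i.e. an ordered list of user actions */
static void walk(const int *m, int seq[], int depth)
{
    int extended = 0;
    for (int t = 0; t < NT; t++) {
        if (depth == MAXDEPTH || !enabled(m, t)) continue;
        int next[NP];
        fire(m, t, next);
        seq[depth] = t;
        walk(next, seq, depth + 1);
        extended = 1;
    }
    if (!extended && depth > 0) {
        printf("test case:");
        for (int i = 0; i < depth; i++) printf(" %s", tname[seq[i]]);
        printf("\n");
    }
}

int main(void)
{
    int m0[NP] = { 1, 0, 0, 0 };        /* initial marking: user is idle */
    int seq[MAXDEPTH];
    walk(m0, seq, 0);
    return 0;
}
```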
35 |
Естимација потрошње енергије вишејезгарних наменских апликација / Estimacija potrošnje energije višejezgarnih namenskih aplikacija / Energy consumption estimation for embedded multicore applications. Krunić, Momčilo. 07 February 2017.
This PhD thesis describes and analyzes an approach to developing a tool for profiling and estimating the energy consumption of embedded applications targeting a multi-core heterogeneous platform designed with an emphasis on low power consumption. The main purpose of the study was to enable prediction of the amount of energy consumed by an embedded DSP application while processing its input signal. The primary goal was to obtain a precise energy consumption model that establishes a direct link between a program solution and the amount of energy required for its execution, in order to support the development of energy-efficient software. The model presented here establishes this link between energy consumption and the program at the instruction level. The solution was tested against real applications, and the predictions of consumed energy were shown to have a high degree of accuracy.
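The instruction-level link described above can be illustrated with a deliberately simple sketch: once profiling yields dynamic instruction counts per class and a calibrated per-class energy cost, the estimate is essentially a weighted sum. The classes, counts, and nanojoule costs below are invented; a real model is calibrated against measurements on the target DSP platform.

```c
/* Sketch of an instruction-level estimate: energy ~= dot product of dynamic
 * instruction-class counts with per-class energy costs.  Classes, counts and
 * nanojoule figures are invented; a real model is calibrated by measurement
 * on the target DSP platform. */
#include <stdio.h>

enum { ALU, MUL, MAC, LOAD, STORE, BRANCH, NCLASS };

static const char   *cls_name[NCLASS] = { "alu", "mul", "mac", "load", "store", "branch" };
static const double  cost_nj [NCLASS] = { 0.12, 0.35, 0.40, 0.55, 0.60, 0.20 };  /* assumed */

/* dynamic instruction counts for one run, e.g. reported by a profiler */
static const unsigned long counts[NCLASS] = { 1200000, 300000, 450000, 500000, 200000, 150000 };

int main(void)
{
    double total_nj = 0.0;
    for (int c = 0; c < NCLASS; c++) {
        double e = counts[c] * cost_nj[c];
        printf("%-7s %10lu instr  %12.0f nJ\n", cls_name[c], counts[c], e);
        total_nj += e;
    }
    printf("estimated energy: %.3f mJ\n", total_nj * 1e-6);
    return 0;
}
```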
36 |
Прилог аутоматској паралелизацији секвенцијалног машинског кода / Prilog automatskoj paralelizaciji sekvencijalnog mašinskog koda / An approach to automatic parallelization of sequential machine code. Marinković, Vladimir. 24 September 2018.
This PhD thesis analyzes support for multicore and manycore systems with the aim of better utilizing their processing power. The purpose of the study is to find a solution that, without programmer involvement, automatically parallelizes existing sequential programs at the binary level that currently execute on a single core (or processor). The result of the research is a solution and a set of tools for parallelizing sequential machine code, which produce programs that run simultaneously on the cores of a multicore processor while achieving a balanced load across them. The primary goal is to speed up program execution on the multicore processor so that real-time processing constraints can be met. The resulting solution could also be used to save energy, by lowering the processor clock while preserving the original program execution time.
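The tool described here works on machine code without programmer involvement; purely to illustrate the target shape of its output, the hypothetical C sketch below shows a sequential loop over independent elements rewritten into the evenly partitioned, multi-threaded form that such a parallelizer aims to produce (thread count, data, and loop body are invented).

```c
/* Hypothetical sketch of the target shape of automatically parallelized code:
 * a sequential loop over independent elements split into near-equal chunks
 * across worker threads (thread count, data and loop body are invented).
 * Build: cc split_sketch.c -pthread */
#include <pthread.h>
#include <stdio.h>

#define N        1000000
#define NTHREADS 4

static double data[N];

struct range { int begin, end; };

static void *worker(void *arg)
{
    struct range *r = arg;
    for (int i = r->begin; i < r->end; i++)
        data[i] = data[i] * 2.0 + 1.0;       /* the original sequential loop body */
    return NULL;
}

int main(void)
{
    for (int i = 0; i < N; i++) data[i] = i;

    pthread_t    tid [NTHREADS];
    struct range part[NTHREADS];

    /* split the iteration space into near-equal chunks (load balancing) */
    for (int t = 0; t < NTHREADS; t++) {
        part[t].begin =  t      * (N / NTHREADS);
        part[t].end   = (t == NTHREADS - 1) ? N : (t + 1) * (N / NTHREADS);
        pthread_create(&tid[t], NULL, worker, &part[t]);
    }
    for (int t = 0; t < NTHREADS; t++)
        pthread_join(tid[t], NULL);

    printf("data[N-1] = %.1f\n", data[N - 1]);
    return 0;
}
```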