51 |
Crescimento econômico, consumo e sustentabilidade: contribuições para um modelo de análise prospectiva dinâmica - o caso dos automóveis em São Paulo / Economic growth, consumption and sustainability: contributions to a dynamic prospective analysis model - the case of the automobiles in São Paulo
Guimarães, Leandro Fraga, 19 May 2016 (has links)
Por volta de meados desse século, haverá 40 megacidades no mundo, quase o dobro das 21 que temos hoje. Cerca de um quinto delas terão mais de 30 milhões de habitantes na sua região metropolitana. A imensa maioria dessas quatro dezenas de enormes aglomerados urbanos se formará em países emergentes, onde as condições de desenvolvimento humano e urbano são menos propícias que o desejável para receber, num espaço tão curto, o desafio de acomodar tamanho contingente de novos habitantes. Esse crescimento poderá trazer consequências não muito simples de serem dimensionadas. Mas é possível desenvolver cenários que permitam antever algumas dessas consequências, para auxiliar no processo de planejamento dos eventos futuros. Para conseguir atingir o objetivo de elaborar cenários que pudessem ser, além de visões sobre o futuro, ferramentas que permitissem ensaios, testes de hipóteses e novas formulações, a elaboração do modelo aqui descrito contou com recursos que vieram de duas metodologias bastante testadas, casos da Dinâmica de Sistemas e da técnica Delphi. O que se procurou fazer foi vencer, com a junção de partes de cada uma delas, as limitações que cada metodologia tem na sua base original. Da Dinâmica de Sistemas, procurou-se aproveitar da completude da análise da cadeia de fatos intervenientes e relações de causa e efeito, sem adotar integralmente a complexidade da sua estrutura total, muito necessária para a montagem de modelos para estudar determinados problemas, mas demasiadamente ampla para questões que necessitam de uma abordagem menos densa. Da técnica Delphi foi utilizada a sua prática de consultas sequenciais a grupos de especialistas, de modo a dar consistência ao processo de simplificação do conjunto de variáveis gerado, sem, no entanto, perder a possibilidade de simular novos cenários apenas com a alteração dos valores e pesos das variáveis, o que, pela característica da metodologia Delphi, só seria possível numa pesquisa inteiramente nova. 
O conjunto de cenários aqui descrito permite algumas conclusões importantes tanto sobre São Paulo quanto sobre as projetadas 40 megacidades de 2040. E a diversidade de análises possíveis, o número praticamente ilimitado de cenários que podem ser gerados, e a relativa simplicidade para se elaborar e utilizar o modelo proposto foram objetivos que se quis alcançar nesse trabalho. / By the middle of this century there will be 40 megacities in the world, almost twice the 21 we have today. About a fifth of them will have more than 30 million inhabitants in their metropolitan areas. The vast majority of these 40 huge urban centers will form in emerging countries, where human and urban development conditions are less favorable than desirable for meeting, in such a short span of time, the challenge of accommodating such a large contingent of new inhabitants. This growth may bring consequences that are not simple to gauge. But it is possible to develop scenarios that anticipate some of these consequences and support the planning of future events. To produce scenarios that could serve not only as visions of the future but also as tools for experimentation, hypothesis testing and new formulations, the model described here drew on resources from two well-tested methodologies: System Dynamics and the Delphi technique. The aim was to overcome, by combining parts of each, the limitations that each methodology has in its original form. From System Dynamics, we took advantage of the completeness of its analysis of the chain of intervening facts and cause-and-effect relationships, without fully embracing the complexity of its overall structure, which is essential for building models of certain problems but too broad for questions that require a less dense approach.
From the Delphi technique, we used its practice of sequential consultation with groups of experts, in order to give consistency to the simplification of the generated set of variables without losing the ability to simulate new scenarios simply by changing the values and weights of the variables, something that, given the nature of the Delphi methodology, would otherwise require an entirely new survey. The set of scenarios described here supports some important conclusions about both São Paulo and the projected 40 megacities of 2040. The diversity of possible analyses, the virtually unlimited number of scenarios that can be generated, and the relative simplicity of building and using the proposed model were the objectives pursued in this work.
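The stock-and-flow mechanics that System Dynamics contributes to the model can be sketched minimally as follows. This is an illustrative toy, not the thesis model: a single stock (a city's car fleet) with inflow and outflow rates, and all parameter values are hypothetical.

```python
# Minimal stock-and-flow sketch in the spirit of System Dynamics:
# one stock (car fleet, in millions) driven by two rates.
# All parameter values below are hypothetical illustrations.

def simulate_fleet(initial_fleet, growth_rate, scrappage_rate, years):
    """Euler integration of d(fleet)/dt = inflow - outflow, yearly steps."""
    fleet = initial_fleet
    trajectory = [fleet]
    for _ in range(years):
        inflow = growth_rate * fleet      # new registrations
        outflow = scrappage_rate * fleet  # vehicles retired
        fleet += inflow - outflow
        trajectory.append(fleet)
    return trajectory

path = simulate_fleet(initial_fleet=8.0, growth_rate=0.04,
                      scrappage_rate=0.02, years=25)
print(round(path[-1], 2))  # fleet size after 25 simulated years
```

In a full System Dynamics model each rate would itself depend on other stocks (income, road capacity, fuel price), forming the feedback loops the abstract alludes to; the Delphi step would then fix the values and weights of those variables.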
|
52 |
Rozšíření modelů analýzy obalu dat a jejich aplikace v automobilovém průmyslu / Data Envelopment Analysis – extension and application in automotive industry
Synková, Rut, January 2003 (has links)
Data Envelopment Analysis (DEA) models are instruments for benchmarking homogeneous production units. The first models were formulated at the end of the 1970s, and they have since been a subject of interest in both theory and applications. The thesis surveys DEA models and their theoretical development. Its second goal is the application of DEA models in an industrial environment and an illustration of their possible use in investment decisions. Domestic literature on DEA has been scarce; the thesis therefore maps the present state of knowledge in this area and extends it with allocation models, dynamic analysis, and the treatment of non-controllable and imprecise variables. The models formulated in the thesis were subsequently applied to data on foreign companies. The numerical experiments were carried out with software support for DEA models built in the MS Excel environment. The main contributions of the thesis are the extension of dynamic analysis with a partial continuous dynamic analysis and with an analysis of the efficiency stability of the evaluated units. A further contribution is the broad application of DEA models to benchmarking companies whose main business is motor-vehicle production. The thesis gives an overview of both frequently and rarely used DEA models, and discusses possible solutions to special situations that may occur in applications. It is divided into six chapters which, typically, contain an illustrative application alongside the theory. The first chapter describes the basic DEA models.
The second chapter presents super-efficiency models and discusses the problems caused by zero inputs and outputs in these models. An overview of allocation models follows. Dynamic analysis and the analysis of efficiency stability are the subject of the fourth chapter. The fifth chapter is dedicated to non-controllable and imprecise variables. The last chapter applies the models to the automotive industry and discusses the possible use of the results for estimating the development of the evaluated units on investment markets.
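The basic model the thesis builds on, the input-oriented CCR envelopment model, can be sketched as a small linear program. This is a generic textbook formulation, not the thesis software; the three-DMU data set is invented for illustration.

```python
# Input-oriented CCR (envelopment form) for one decision-making unit:
#   min theta  s.t.  sum_j lam_j * x_j <= theta * x_o,
#                    sum_j lam_j * y_j >= y_o,  lam >= 0.
# Toy data; a generic sketch, not the thesis implementation.
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """X: (n_dmus, n_inputs), Y: (n_dmus, n_outputs); returns theta of DMU o."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.zeros(1 + n)            # decision vector: [theta, lam_1..lam_n]
    c[0] = 1.0
    # inputs:  X^T lam - theta * x_o <= 0
    A_in = np.hstack([-X[o].reshape(m, 1), X.T])
    # outputs: -Y^T lam <= -y_o
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([np.zeros(m), -Y[o]]),
                  bounds=[(0, None)] * (1 + n), method="highs")
    return res.x[0]

X = np.array([[2.0], [4.0], [3.0]])  # one input per DMU
Y = np.array([[1.0], [2.0], [1.0]])  # one output per DMU
print(round(ccr_efficiency(X, Y, 2), 4))  # DMU 2 needs only 2/3 of its input
```

The super-efficiency, allocation and dynamic variants discussed in the chapters modify this same LP (e.g., excluding DMU o from its own reference set).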
|
53 |
Metodologia dinâmica para avaliação da efetividade de otimização e exploração de localidade de valor. / Dynamic methodology for optimization effectiveness evaluation and value locality exploitation.
Carlos Henrique Andrade Costa, 24 September 2012 (has links)
O desempenho de um software depende das múltiplas otimizações no código realizadas por compiladores modernos para a remoção de computação redundante. A identificação de computação redundante é, em geral, indecidível em tempo de compilação, e impede a obtenção de um caso ideal de referência para a medição do potencial inexplorado de remoção de redundâncias remanescentes e para a avaliação da eficácia de otimização do código. Este trabalho apresenta um conjunto de métodos para a análise da efetividade de otimização de código através da observação do conjunto completo de instruções dinamicamente executadas e referências à memória na execução completa de um programa. Isso é feito por meio do desenvolvimento de um algoritmo de value numbering dinâmico e sua aplicação conforme as instruções vão sendo executadas. Este método reduz a análise interprocedural à análise de um grande bloco básico e detecta operações redundantes de memória e operações escalares que são visíveis apenas em tempo de execução. Desta forma, o trabalho estende a análise de reuso de instruções e oferece tanto uma aproximação mais exata do limite superior de otimização explorável dentro de um programa, quanto um ponto de referência para avaliar a eficácia de uma otimização. O método também provê uma visão clara de hotspots de redundância não explorados e uma medida de localidade de valor dentro da execução completa de um programa. Um modelo que implementa o método e integra-o a um simulador completo de sistema baseado em Power ISA 64-bits (versão 2.06) é desenvolvido. Um estudo de caso apresenta os resultados da aplicação deste método em relação a executáveis de um benchmark representativo (SPECInt2006) criados para cada nível de otimização do compilador GNU C/ C++. A análise proposta produz uma avaliação prática de eficácia da otimização de código que revela uma quantidade significativa de redundâncias remanescentes inexploradas, mesmo quando o maior nível de otimização disponível é usado. 
Fontes de ineficiência são identificadas através da avaliação de hotspots e de localidade de valor. Estas informações revelam-se úteis para o ajuste do compilador e da aplicação. O trabalho ainda apresenta um mecanismo eficiente para explorar o suporte de hardware na eliminação de redundâncias. / Software performance relies on multiple optimization techniques applied by modern compilers to remove redundant computation. The identification of redundant computation is in general undecidable at compile-time and prevents one from obtaining an ideal reference for the measurement of the remaining unexploited potential of redundancy removal and for the evaluation of code optimization effectiveness. This work presents a methodology for optimization effectiveness analysis by observing the complete dynamic stream of executed instructions and memory references in the whole program execution, and by developing and applying a dynamic value numbering algorithm as instructions are executed. This method reduces the interprocedural analysis to the analysis of a large basic block and detects redundant memory and scalar operations that are visible only at run-time. This way, the work extends the instruction-reuse analysis and provides both a more accurate approximation of the upper bound of exploitable optimization in the program and a reference point to evaluate optimization effectiveness. The method also generates a clear picture of unexploited redundancy hotspots and a measure of value locality in the whole application execution. A framework that implements the method and integrates it with a full-system simulator based on Power ISA 64-bit (version 2.06) is developed. A case study presents the results of applying this method to representative benchmark (SPECInt 2006) executables generated by various compiler optimization levels of GNU C/C++ Compiler. 
The proposed analysis yields a practical evaluation of code-optimization effectiveness that reveals a significant amount of remaining unexploited redundancy, even at the highest optimization level available. Sources of inefficiency are identified through the evaluation of hotspots and value locality, information that is useful for compiler and application tuning. The thesis also presents an efficient mechanism for exploiting hardware support for redundancy elimination.
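The core idea of value numbering applied dynamically, as the abstract describes, can be sketched over a toy instruction trace. The instruction format and labels here are made up for illustration; the thesis operates on the full Power ISA instruction stream.

```python
# Sketch of dynamic value numbering over an executed instruction trace.
# Each instruction is (dest, op, src1, src2). An instruction whose
# (op, vn(src1), vn(src2)) key was already computed is redundant:
# it recomputes a value the program has produced before.

def dynamic_value_numbering(trace):
    vn = {}        # variable -> value number
    table = {}     # (op, vn1, vn2) -> value number of the result
    next_vn = 0
    redundant = []

    def num(x):
        nonlocal next_vn
        if x not in vn:          # first sight of an input value
            vn[x] = next_vn
            next_vn += 1
        return vn[x]

    for i, (dest, op, a, b) in enumerate(trace):
        key = (op, num(a), num(b))
        if key in table:
            redundant.append(i)  # dynamically redundant instruction
            vn[dest] = table[key]
        else:
            table[key] = next_vn
            vn[dest] = next_vn
            next_vn += 1
    return redundant

trace = [
    ("t1", "add", "x", "y"),
    ("t2", "add", "x", "y"),   # same op over the same value numbers
    ("t3", "mul", "t1", "t2"),
]
print(dynamic_value_numbering(trace))  # → [1]
```

Because the whole executed stream is one "large basic block", no control-flow merge ever invalidates the table, which is what lets the method see redundancies invisible at compile time.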
|
54 |
MARS: uma arquitetura para análise de malwares utilizando SDN. / MARS: an SDN-based malware analysis solution.
João Marcelo Ceron, 08 December 2017 (has links)
Detectar e analisar malwares é um processo essencial para aprimorar os sistemas de segurança. As soluções atuais apresentam limitações no processo de investigação e detecção de códigos maliciosos sofisticados. Mais do que utilizar técnicas para evadir sistemas de análise, malwares sofisticados requerem condições específicas no ambiente em que são executados para revelar seu comportamento malicioso. Com o surgimento das Redes Definidas por Software (SDN), notou-se uma oportunidade para aprimorar o processo de investigação de malware propondo uma arquitetura flexível apta a detectar variações comportamentais de maneira automática. Esta tese apresenta uma arquitetura especializada para analisar códigos maliciosos que permite controlar de maneira unificada o ambiente de análise, incluindo o sandbox e os elementos que o circundam. Dessa maneira, é possível gerenciar regras de contenção, configuração dinâmica de recursos, e manipular o tráfego de rede gerado pelos malwares. Para avaliar a arquitetura foi analisado um conjunto de malwares em dois cenários de avaliação. No primeiro cenário de avaliação, as funcionalidades descritas pela solução proposta revelaram novos eventos comportamentais em 100% dos malwares analisados. Já, no segundo cenário de avaliação, foi analisado um conjunto de malwares projetados para dispositivos IoT. Em consequência, foi possível bloquear ataques, monitorar a comunicação do malware com seu controlador de botnet, e manipular comandos de ataques. / Mechanisms to detect and analyze malicious software are essential to improving security systems. Current security mechanisms have limited success in detecting sophisticated malicious software. Beyond using techniques to evade analysis systems, much malware requires specific conditions in the execution environment to reveal its malicious behavior. The flexibility of Software-Defined Networking (SDN) provides an opportunity to develop a malware analysis architecture that can detect behavioral deviations in an automated way.
This thesis presents a specialized architecture for analyzing malware that manages the analysis environment in a unified way, controlling both the sandbox and the elements that surround it. The proposed architecture makes it possible to define the network access policy, to configure the analysis environment's resources dynamically, and to manipulate the network connections performed by the malware. To evaluate the solution, a set of malware was analyzed in two evaluation scenarios. In the first, the proposed mechanisms revealed new behavioral events in 100% of the malware analyzed. In the second, malware designed for IoT devices was analyzed; using the MARS features, it was possible to block attacks, manipulate attack commands, and monitor the malware's communication with its botnet controller. The experimental results show that the solution improves the dynamic malware analysis process by bringing this configuration flexibility to the analysis environment.
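The kind of containment policy an SDN controller can impose on sandbox traffic can be sketched as rule matching over new flows. This is a generic illustration of the concept, not the MARS implementation; the rules, ports and action names are hypothetical.

```python
# Hypothetical sketch of SDN-style containment for sandbox traffic:
# each new flow from the sandbox is matched against ordered rules
# that drop it, redirect it to a sinkhole, or let it out for observation.

def containment_action(flow, rules):
    """flow: dict with 'dst_ip' and 'dst_port'; returns the chosen action."""
    for rule in rules:
        if rule["match"](flow):
            return rule["action"]
    return "forward"  # default: allow, so the malware keeps behaving

rules = [
    # block outbound SMTP so the sample cannot actually send spam
    {"match": lambda f: f["dst_port"] == 25, "action": "drop"},
    # redirect IRC-style C&C traffic to a local sinkhole for monitoring
    {"match": lambda f: f["dst_port"] == 6667,
     "action": "redirect:sinkhole"},
]

print(containment_action({"dst_ip": "203.0.113.9", "dst_port": 25}, rules))
```

In a real deployment these decisions would be installed as flow-table entries by the controller, so subsequent packets of the same flow are handled in the data plane.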
|
55 |
Validação de uma estrutura de análise ofensiva no basquetebol baseada no sequenciamento de dinâmicas de criação de espaço / Validation of an offensive framework to analyse basketball matches based on the sequencing of space creation dynamics
Felipe Luiz Santana, 09 March 2016 (has links)
A inferência de estratégias ofensivas em esportes coletivos pode ser realizada a partir da análise dos padrões de jogo observados durante a disputa. Para que isso ocorra, há a necessidade da formalização de classes de comportamentos específicos para a modalidade de forma a discriminar perfis de jogo com base na identificação das ações mais recorrentes. No basquetebol as ações são encadeadas ao longo da posse de bola, sendo que os diferentes tipos de sequências de ações contêm características que os diferenciam e podem influenciar diretamente no desfecho do ataque. Nesse trabalho foi apresentada uma proposta contendo diferentes possibilidades de sequenciamento de dinâmicas ofensivas baseadas em um modelo teórico descrito na literatura. Os procedimentos de validação do sequenciamento de dinâmicas ofensivas e os testes de reprodutibilidade e objetividade realizados junto a técnicos de basquetebol apresentaram valores elevados demonstrando a consistência dos critérios para a elaboração de 27 tipos de concatenações dependentes (Qui-quadrado >0,78). Além disso, a estrutura desenvolvida foi concluída através da aplicação do constructo a jogos de basquetebol da liga profissional Americana (NBA) (28 partidas, dentre as quais 10 partidas do confronto entre Spurs x Thunder, 10 partidas referentes ao confronto entre Heat e Pacers e 8 partidas da disputa envolvendo Heat e Spurs, sendo analisados ambos os ataques em cada confronto, válidos pela temporada regular e na fase de playoffs). Os resultados gerados a partir da análise foram apresentados através de árvores de decisão e grafos de modo a facilitar a visualização dos comportamentos identificados. A árvore de decisão apresentou as ações na sequência exata em que ocorreram nas posses de bola, enquanto os grafos mostraram os encadeamentos mais recorrentes entre duas dinâmicas ofensivas. 
Assim ambas as técnicas se mostraram complementares e auxiliaram na observação e análise dos perfis de jogo de cada equipe e na realização de inferências acerca de sua estratégia ofensiva. A formalização dos tipos de sequenciamento de ações ofensivas pode auxiliar treinadores e profissionais do basquetebol no desenho de estratégias, análise dos padrões de suas equipes e adversários e estruturação de sessões de treinamento que considerem os comportamentos ofensivos de modo dinâmico e contextualizado dentro de um encadeamento lógico de ações de jogo / The inference of offensive strategy in team sports can be performed through the analysis of game patterns observed during matches. To achieve this, formal classes of behavior have to be developed for each sport in order to discriminate the most recurrent offensive actions. In basketball, game actions are chained along a ball possession, and different types of action sequences have features that differentiate them and can directly influence the offensive outcome. This study presents a proposal containing different possibilities of sequencing offensive dynamics, based on a theoretical model described in the literature. The validation procedures for the sequencing of offensive dynamics, and the reliability and objectivity tests performed with basketball coaches, yielded high scores, demonstrating the consistency of the criteria and resulting in 27 types of dependent concatenations (Chi-square > 0.78). In addition, the framework was completed by applying the construct to professional games of the National Basketball Association (NBA): 28 games, of which 10 were Spurs vs. Thunder, 10 Heat vs. Pacers, and 8 Heat vs. Spurs, with both offenses analyzed in each matchup, drawn from the regular season and the playoffs.
The results of the analysis were presented as decision trees and graphs to ease visualization of the identified behaviors. Decision trees presented the offensive actions in the exact sequence in which they occurred in ball possessions, while the graphs showed the most recurrent concatenations between two offensive dynamics. The two techniques proved complementary, supporting the observation and analysis of each team's game profile and the drawing of inferences about its offensive strategy. The formalization of types of sequencing of offensive actions can help coaches and basketball professionals design game strategies, analyze the offensive patterns of their teams and opponents, and structure practice sessions that treat offensive behaviors dynamically and in context, within a logical chain of game actions.
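The pairwise concatenations that the study's graphs visualize amount to counting transitions between consecutive dynamics within each possession. The sketch below illustrates that counting step only; the action labels are invented, not the study's formal classes.

```python
# Counting recurrent concatenations between two offensive dynamics
# across ball possessions (the edges of a transition graph).
# Labels are hypothetical, not the study's validated categories.
from collections import Counter

def transition_counts(possessions):
    """possessions: list of action-label sequences, one per possession.
    Returns a Counter over (action, next_action) pairs."""
    pairs = Counter()
    for seq in possessions:
        for a, b in zip(seq, seq[1:]):   # consecutive pairs in order
            pairs[(a, b)] += 1
    return pairs

possessions = [
    ["pick-and-roll", "drive", "kick-out"],
    ["pick-and-roll", "drive", "pull-up"],
    ["off-ball-screen", "cut"],
]
counts = transition_counts(possessions)
print(counts[("pick-and-roll", "drive")])  # → 2
```

The most frequent pairs become the heaviest edges of the graph, while the full ordered sequences feed the decision-tree view.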
|
57 |
The Behavior of Moment Resisting Steel Frames Under Seismic Excitation with Variation of Geometric Dimensions of Architectural Setbacks
Kayikci, Duygu y, 12 May 2011 (has links)
This study investigates the seismic response of moment-resisting steel frames (MRSF) with architectural setbacks. The main objective is to understand how the elastic and inelastic, static and dynamic behavior varies with the geometric dimensions of the tower portion. A second objective is to determine the adequacy of analysis procedures of various rigors, specified in current seismic design provisions, in predicting those behaviors for MRSF with various setback sizes. The analytical study uses one regular model and 16 irregular models to capture all possible setback configurations in five-story, five-bay MRSFs. Each irregular model is developed by gradually changing the horizontal and vertical dimensions of the tower portion of the regular base 2D frame model. All models were designed for (a) equal global displacement and uniform distribution of inter-story drift under a First-Mode (FM) lateral force distribution pattern at first significant yield, and (b) equal first-mode period of vibration, using a nonlinear static seismic analysis procedure. Among the conclusions derived from the research is that the variation of (a) the elastic and inelastic inter-story drift and the ductility demand for the top three stories, and (b) the elastic and inelastic global displacement, exhibited a pattern similar to the variation of the FM participation factor at the roof, PF1Φr,1. The square-root-of-sum-of-squares (SRSS) distribution provided accurate estimates of elastic story shear and inter-story drift demand, as well as story yield strength and drift.
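The SRSS combination mentioned in the conclusion is a standard way to estimate a peak response from per-mode peaks. A minimal sketch, with illustrative drift values that are not from the study:

```python
# Square-root-of-sum-of-squares (SRSS) modal combination: estimate the
# peak total response from the peak responses of individual modes.
# The drift values below are illustrative only.
import math

def srss(modal_peaks):
    """Combine per-mode peak responses into one peak estimate."""
    return math.sqrt(sum(r * r for r in modal_peaks))

# peak inter-story drift (in) of one story for the first three modes
print(round(srss([0.8, 0.3, 0.1]), 4))  # → 0.8602
```

SRSS assumes well-separated modal frequencies; closely spaced modes call for combinations such as CQC instead.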
|
58 |
Dynamic analysis of multiple-body floating platforms coupled with mooring lines and risers
Kim, Young-Bok, 30 September 2004 (has links)
A computer program, WINPOST-MULT, is developed for the dynamic analysis of a multiple-body floating system coupled with mooring lines and risers in the presence of waves, winds and currents. The coupled dynamics program for a single platform is extended to multiple-body systems by including all the platforms, mooring lines and risers in a combined matrix equation in the time domain. Compared with iterating between bodies, the combined matrix method can include the full hydrodynamic interactions among bodies. Each floating platform is modeled as a rigid body with six degrees of freedom. The first- and second-order wave forces, added mass coefficients, and radiation damping coefficients are calculated with the hydrodynamics program WAMIT for multiple bodies. The time series of wave forces are then generated in the time domain based on the two-term Volterra model. The wind forces are generated separately from the input wind spectrum and a wind force formula, and the current is included through Morison's drag force formula. In the case of the FPSO, the wind and current forces are generated using the respective coefficients given in the OCIMF data sheet. A finite element method is derived for long elastic elements of arbitrary shape and material. The newly developed program is first applied to a turret-moored FPSO and a shuttle tanker in tandem mooring. The dynamics of the turret-moored FPSO in waves, winds and currents are verified against an independent computation and an OTRC experiment. Simulations of the FPSO-shuttle system with a hawser connection are then carried out, and the results are compared with simplified methods that ignore or only partially include hydrodynamic interactions.
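The "combined matrix" idea can be sketched in miniature: assemble the equations of motion of all bodies and connectors into one system M x'' + C x' + K x = F(t), so coupling terms act inside a single solve per time step. The sketch below uses a generic Newmark (average-acceleration) step with toy matrices; these are not hydrodynamic coefficients, and a marine time-domain code would also update wave, wind and drag loads each step.

```python
# One average-acceleration Newmark step for M x'' + C x' + K x = f(t).
# M, C, K are the assembled (all bodies + lines) system matrices.
# Matrices and loads below are toy values for illustration only.
import numpy as np

def step_newmark(M, C, K, x, v, a, f_next, dt, beta=0.25, gamma=0.5):
    Keff = M / (beta * dt**2) + (gamma / (beta * dt)) * C + K
    rhs = (f_next
           + M @ (x / (beta * dt**2) + v / (beta * dt) + (1/(2*beta) - 1) * a)
           + C @ ((gamma / (beta * dt)) * x + (gamma/beta - 1) * v
                  + dt * (gamma/(2*beta) - 1) * a))
    x_new = np.linalg.solve(Keff, rhs)            # one coupled solve
    a_new = (x_new - x - dt * v) / (beta * dt**2) - (1/(2*beta) - 1) * a
    v_new = v + dt * ((1 - gamma) * a + gamma * a_new)
    return x_new, v_new, a_new

M = np.eye(2)
C = 0.5 * np.eye(2)
K = np.array([[2.0, -1.0], [-1.0, 2.0]])   # off-diagonal = coupling
f = np.array([1.0, 0.0])                   # constant load
x = np.zeros(2); v = np.zeros(2)
a = np.linalg.solve(M, f - C @ v - K @ x)  # consistent initial acceleration
for _ in range(3000):
    x, v, a = step_newmark(M, C, K, x, v, a, f, dt=0.01)
print(np.round(x, 3))  # settles near the static solution of K x = f
```

The off-diagonal blocks of K (and, in the real problem, of the added-mass and damping matrices) are exactly the interaction terms that a body-by-body iteration scheme can only approximate.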
|
59 |
Benchmarking Points-to Analysis
Gutzmann, Tobias, January 2013 (has links)
Points-to analysis is a static program analysis that, simply put, computes which objects created at certain points of a given program might show up at which other points of the same program. In particular, it computes possible targets of a call and possible objects referenced by a field. Such information is essential input to many client applications in optimizing compilers and software engineering tools. Comparing experimental results with respect to accuracy and performance is required in order to distinguish the promising from the less promising approaches to points-to analysis. Unfortunately, comparing the accuracy of two different points-to analysis implementations is difficult, as there are many pitfalls in the details. In particular, there are no standardized means to perform such a comparison, i.e., no benchmark suite - a set of programs with well-defined rules of how to compare different points-to analysis results - exists. Therefore, different researchers use their own means to evaluate their approaches to points-to analysis. To complicate matters, even the same researchers do not stick to the same evaluation methods, which often makes it impossible to take two research publications and reliably tell which one describes the more accurate points-to analysis. In this thesis, we define a methodology for benchmarking points-to analysis. We create a benchmark suite, compare three different points-to analysis implementations with each other based on this methodology, and explain differences in analysis accuracy. We also argue for the need for a Gold Standard, i.e., a set of benchmark programs with exact analysis results. Such a Gold Standard is often required to compare points-to analysis results, and it also allows one to assess the exact accuracy of points-to analysis results. Since such a Gold Standard cannot be computed automatically, it needs to be created semi-automatically by the research community.
We propose a process for creating a Gold Standard based on under-approximating it through optimistic (dynamic) analysis and over-approximating it through conservative (static) analysis. With the help of improved static and dynamic points-to analysis and expert knowledge about benchmark programs, we present a first attempt towards a Gold Standard. We also provide a Web-based benchmarking platform, through which researchers can compare their own experimental results with those of other researchers, and can contribute towards the creation of a Gold Standard.
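The under/over-approximation process can be sketched as set algebra: a dynamic analysis observes a subset of the true points-to relation, a sound static analysis computes a superset, and only the gap needs expert judgement. The variable and class names below are made up for illustration.

```python
# Bounding a Gold Standard between a dynamic under-approximation and
# a static over-approximation of the points-to relation.
# Pairs are (variable, allocation-site class); names are hypothetical.

def gold_standard_status(dynamic_pts, static_pts):
    """Classify points-to pairs relative to the two approximations."""
    confirmed = dynamic_pts & static_pts   # certainly in the Gold Standard
    undecided = static_pts - dynamic_pts   # needs expert judgement
    unsound = dynamic_pts - static_pts     # non-empty => static analysis bug
    return confirmed, undecided, unsound

dynamic_pts = {("p", "A"), ("q", "B")}             # observed at run time
static_pts = {("p", "A"), ("q", "B"), ("q", "C")}  # conservatively computed
confirmed, undecided, unsound = gold_standard_status(dynamic_pts, static_pts)
print(sorted(undecided))  # → [('q', 'C')]
```

Improving either side shrinks the undecided set, which is why the process pairs better dynamic coverage with more precise static analysis before experts inspect what remains.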
|
60 |
Entropy, information theory and spatial input-output analysis
Batten, David F., January 1981 (has links)
Interindustry transactions recorded at a macro level are simply summations of commodity shipment decisions taken at a micro level. The resulting statistical problem is to obtain minimally biased estimates of commodity flow distributions at the disaggregated level, given various forms of aggregated information. This study demonstrates the application of the entropy-maximizing paradigm in its traditional form, together with recent adaptations emerging from information theory, to the area of spatial and non-spatial input-output analysis. A clear distinction between the behavioural and statistical aspects of entropy modelling is suggested. The discussion of non-spatial input-output analysis emphasizes the rectangular and dynamic extensions of Leontief's original model, and also outlines a scheme for simple aggregation, based on a criterion of minimum loss of information. In the chapters on spatial analysis, three complementary approaches to the estimation of interregional flows are proposed. Since the static formulations cannot provide an accurate picture of the gross interregional flows between any two sectors, Leontief's dynamic framework is adapted to the problem. The study concludes by describing a hierarchical system of models to analyse feasible paths of economic development over space and time.
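A standard computational route to the entropy-maximizing flow estimate under marginal constraints is iterative proportional fitting (the RAS method): with a uniform prior it recovers the maximum-entropy matrix consistent with the given row and column totals. The totals below are toy values, and this generic sketch is not the study's own model.

```python
# Iterative proportional fitting (RAS): scale a seed matrix alternately
# so its row and column sums match known totals. With a uniform seed
# this yields the entropy-maximizing flow estimate. Toy data only.
import numpy as np

def ipf(seed, row_totals, col_totals, iters=100):
    T = seed.astype(float).copy()
    for _ in range(iters):
        T *= (row_totals / T.sum(axis=1))[:, None]   # fit row margins
        T *= (col_totals / T.sum(axis=0))[None, :]   # fit column margins
    return T

seed = np.ones((2, 2))                  # maximally non-committal prior
T = ipf(seed, np.array([10.0, 20.0]),   # e.g. regional outflow totals
        np.array([12.0, 18.0]))         # e.g. regional inflow totals
print(np.round(T, 3))                   # rows sum to 10/20, cols to 12/18
```

Replacing the uniform seed with a prior-year transactions table gives the minimum-information-loss update of an input-output table, which is the statistical side of entropy modelling the abstract distinguishes from the behavioural side.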
|