541 |
Spectrum Sensing in Cognitive Radio Networks. Bokharaiee Najafee, Simin. 07 1900.
Given the ever-growing demand for radio spectrum, cognitive radio has recently emerged as an attractive wireless communication technology. This dissertation is concerned with developing spectrum sensing algorithms for cognitive radio networks in which one or more cognitive radios (CRs) assist in detecting licensed primary bands employed by one or more primary users.
First, given that orthogonal frequency-division multiplexing (OFDM) is an important
wideband transmission technique, the detection of OFDM signals in low-signal-to-noise-ratio scenarios is studied. It is shown that the cyclic prefix correlation coefficient (CPCC)-based spectrum sensing algorithm, previously introduced as a simple and computationally efficient spectrum-sensing method for OFDM signals, is a special case of the constrained generalized likelihood ratio test (GLRT) in the absence of multipath.
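As a rough illustration of the CPCC idea (the cyclic-prefix samples of each OFDM symbol are copies of the symbol's tail and therefore correlate with it, while pure noise does not), the sketch below computes an empirical CP correlation statistic. It assumes perfect symbol timing and no multipath, and every name in it is illustrative rather than taken from the dissertation:

```python
import numpy as np

def cpcc_statistic(r, n_fft, n_cp, n_symbols):
    # Empirical cyclic-prefix correlation coefficient of received samples r.
    # Assumes perfect symbol timing and no multipath.
    sym_len = n_fft + n_cp
    num, den = 0.0 + 0.0j, 0.0
    for m in range(n_symbols):
        start = m * sym_len
        cp = r[start:start + n_cp]                    # cyclic-prefix samples
        tail = r[start + n_fft:start + n_fft + n_cp]  # the samples the CP copies
        num += np.vdot(tail, cp)                      # CP/tail cross-correlation
        den += 0.5 * (np.sum(np.abs(cp) ** 2) + np.sum(np.abs(tail) ** 2))
    return np.abs(num) / den  # near 0 for noise only, larger when OFDM is present

# Under noise only the statistic stays small; in practice it is compared to a
# threshold chosen for a target false-alarm probability.
rng = np.random.default_rng(0)
n = 10 * (64 + 16)
noise = rng.standard_normal(n) + 1j * rng.standard_normal(n)
print(cpcc_statistic(noise, n_fft=64, n_cp=16, n_symbols=10))
```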
The performance of the CPCC-based algorithm degrades in multipath scenarios. However, by employing the inherent structure of OFDM signals and exploiting multipath correlation within the GLRT framework, a simple, low-complexity algorithm called the multipath-based constrained GLRT (MP-based C-GLRT) is obtained. Further performance improvement is achieved by combining the CPCC-based and MP-based C-GLRT algorithms. A simple GLRT-based detection algorithm is also developed for unsynchronized OFDM signals.
In the next part of the dissertation, a cognitive radio network model with multiple CRs is considered in order to investigate the benefit of collaboration and diversity in improving the overall sensing performance. Specifically, the problem of decision fusion for cooperative spectrum sensing is studied when fading channels are present between the CRs and the fusion center (FC). Noncoherent transmission schemes with on-off keying (OOK) and binary frequency-shift keying (BFSK) are employed to transmit the binary decisions to the FC. The aim is to maximize the achievable secondary throughput of the CR network.
Finally, in order to reduce the transmission bandwidth required in the CRs' reporting phase of a cooperative sensing scheme, the last part of the dissertation examines nonorthogonal transmission of local decisions by means of on-off keying. A novel decoding-based fusion rule that combines the hard decisions in a linear manner is proposed and analyzed.
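As a generic illustration of linear hard-decision combining at the FC (a plain weighted vote, not the specific decoding-based rule proposed in the dissertation; the weights and threshold are invented):

```python
import numpy as np

def linear_fusion(decisions, weights, threshold):
    # Weighted linear combination of local hard decisions (1 = band occupied);
    # the FC declares the band occupied when the sum crosses the threshold.
    return int(np.dot(weights, decisions) >= threshold)

decisions = np.array([1, 0, 1, 1])        # hard decisions reported by 4 CRs
weights = np.array([0.9, 0.6, 0.8, 0.7])  # e.g., per-CR reliability weights
print(linear_fusion(decisions, weights, threshold=1.5))  # -> 1 (occupied)
```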
|
542 |
Mining Multi-Dimension Rules in Multiple Database Segmentation: On Examples of Cross Selling. Wu, Chia-Chi (吳家齊). Unknown Date.
In today's customer-oriented market, the vision of "better service for better customers" has gradually shifted to "the appropriate service for every customer". Through cross-selling, companies can offer each customer an appropriate combination of services and products. In Taiwan's financial sector, many financial holding companies have been founded in recent years amid financial consolidation; by pooling the resources and capital of their banking, insurance, and securities arms, they aim to integrate their subsidiaries to achieve cross-domain joint marketing. This new marketing approach requires information technology that can express the relationships between data items, and the association rule is a crucial element of the data warehouses that support such joint marketing.
Traditional association rule mining can uncover customers' latent purchase tendencies in a transaction database. If we can further pinpoint which customers exhibit a given tendency, and when and where, we can devise more precise and more profitable marketing strategies. However, most related work assumes that a discovered rule is equally valid in every region of the database, which is clearly not the case in most real situations.
This study focuses on efficiently mining association rules in database regions of different dimensions and sizes. We develop a mechanism that automatically generates segmentations of the database at different granularities along each dimension, and propose an algorithm that discovers the association rules holding in each segmentation. The method has the following advantages:
1. For each discovered association rule, the algorithm further identifies the regions of the database in which the rule holds, including rules that hold only in a few segments.
2. It mines all association rules in all predefined segmentations while requiring less prior user knowledge and fewer repeated database scans than previous methods.
3. By retaining the algorithm's intermediate results, the method supports incremental rule mining.
We present two examples to evaluate the proposed method; the results show that it outperforms previous methods in both efficiency and scalability.
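As a rough, hypothetical sketch of the core idea (mining rules separately per database segment and recording where each holds), the toy code below counts frequent item pairs per segment; the segment keys, data, and the pair-only restriction are invented simplifications, not the thesis algorithm:

```python
from collections import defaultdict
from itertools import combinations

def frequent_pairs_per_segment(segments, min_support):
    # Count item pairs separately in each segment and record the segments
    # in which each pair reaches the support threshold.
    where_frequent = defaultdict(list)
    for key, rows in segments.items():
        counts = defaultdict(int)
        for items in rows:
            for pair in combinations(sorted(items), 2):
                counts[pair] += 1
        for pair, c in counts.items():
            if c / len(rows) >= min_support:
                where_frequent[pair].append(key)
    return dict(where_frequent)

# Hypothetical segments keyed by (quarter, branch).
db = {
    ("2005Q1", "Taipei"): [{"fund", "insurance"}, {"fund", "insurance"}, {"loan"}],
    ("2005Q2", "Taipei"): [{"loan", "card"}, {"fund"}, {"card"}],
}
print(frequent_pairs_per_segment(db, min_support=0.5))
# {('fund', 'insurance'): [('2005Q1', 'Taipei')]}
```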
|
543 |
以規則為基礎的分類演算法:應用粗糙集 / A Rule-Based classification algorithm: a rough set approach. Liao, Chia Chi (廖家奇). Unknown Date.
In this thesis, we propose a rule-based classification algorithm named ROUSER (ROUgh SEt Rule), which uses rough set theory as the basis of the search heuristics in its rule-generation process. We implement ROUSER using a well-developed and widely used toolkit, evaluate it on several public data sets, and examine its applicability in a real-world case study.
The origin of the problem addressed in this thesis can be traced back to a real-world problem where the goal is to determine whether a data record collected from a sensor corresponds to a machine fault. To support the root cause analysis of machine faults, we design and implement a rule-based classification algorithm that generates models consisting of human-understandable decision rules connecting symptoms to causes. Moreover, the data contain contradictions. For example, two data records collected at different time points may be similar, or even identical except for their timestamps, while one corresponds to a machine fault and the other does not. The challenge is to analyze data with such contradictions. We use rough set theory to overcome this challenge, since it is able to process imperfect knowledge.
Researchers have proposed various classification algorithms and practitioners have applied them to various application domains, but most classification algorithms are designed without a focus on the interpretability or understandability of the resulting models. ROUSER is specifically designed to extract human-understandable decision rules from nominal data. What distinguishes ROUSER from most, if not all, other rule-based classification algorithms is that it utilizes a rough set approach to select features. ROUSER also provides several ways to decide an appropriate attribute-value pair for the antecedents of a rule. Moreover, ROUSER's rule-generation method is based on the separate-and-conquer strategy, and is hence more efficient than the indiscernibility-matrix method widely adopted in classification algorithms based on rough set theory.
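As a rough illustration of the separate-and-conquer strategy mentioned above (grow one conjunctive rule greedily, remove the examples it covers, repeat), the toy learner below uses a simple purity score in place of ROUSER's rough-set heuristics; all names and data are invented:

```python
def learn_rules(examples, tests):
    # examples: list of (attribute dict, label) pairs with label 1 = positive;
    # tests: candidate (attribute, value) conditions for rule antecedents.
    rules, remaining = [], list(examples)
    while any(y for _, y in remaining):
        rule, covered, avail = [], list(remaining), list(tests)
        while avail and any(not y for _, y in covered):   # rule still impure
            def score(t):
                hit = [y for x, y in covered if x.get(t[0]) == t[1]]
                return (sum(hit) / len(hit), len(hit)) if hit else (-1.0, 0)
            best = max(avail, key=score)
            if score(best)[1] == 0:
                break                                     # nothing coverable
            avail.remove(best)
            rule.append(best)
            covered = [(x, y) for x, y in covered if x.get(best[0]) == best[1]]
        pos_before = sum(y for _, y in remaining)
        remaining = [(x, y) for x, y in remaining
                     if not all(x.get(a) == v for a, v in rule)]
        if not rule or sum(y for _, y in remaining) == pos_before:
            break                                         # no progress; stop
        rules.append(rule)
    return rules

data = [({"color": "red", "shape": "round"}, 1),
        ({"color": "red", "shape": "square"}, 0),
        ({"color": "blue", "shape": "round"}, 1),
        ({"color": "blue", "shape": "square"}, 0)]
tests = [(a, v) for a in ("color", "shape")
         for v in ("red", "blue", "round", "square")]
print(learn_rules(data, tests))  # [[('shape', 'round')]]
```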
We conduct extensive experiments to evaluate the capability of ROUSER. On about half of the nominal data sets considered in the experiments, ROUSER achieves accuracy comparable to or better than that of classification algorithms that generate decision rules or trees. On some of the discretized data sets, ROUSER also achieves comparable or better accuracy. We also present experimental results on the embedded feature selection method and on several ways of deciding an appropriate attribute-value pair for the antecedents of a rule.
|
544 |
Ensaios sobre política monetária e fiscal no Brasil (Essays on monetary and fiscal policy in Brazil). Caetano, Sidney Martins. January 2007.
This thesis presents three essays on monetary and fiscal policy under the current inflation-targeting regime. The first essay studies a possible monetary-fiscal integration by deriving an optimal monetary policy rule under a fiscal constraint, analyzing how different preferences affect the optimal rule as the weights placed on deviations of the primary surplus/GDP ratio from its pre-established target are varied. The results show that the resulting optimal rule features a negative response of interest rates to shocks in the debt-to-GDP ratio. Moreover, larger primary surpluses relative to GDP would allow larger interest rate reductions, in proportion to the weight this target variable carries in the social loss function. From the traditional viewpoint of the monetary policy transmission mechanism, the positive response of interest rates to a real exchange rate depreciation and to a rise in the risk premium is preserved. The results therefore suggest that adopting an explicit target for the primary surplus/GDP ratio has positive consequences for the optimal monetary policy rule, for the reduction of interest rates, and for the efficiency of the current monetary policy instrument. The second essay analyzes the default-risk relation using a beta regression model, along with the impact that primary surpluses can have on the risk premium and, consequently, on the exchange rate. Anchored in Blanchard's (2004/2005) model of default risk, the beta regression estimates for the four relations proposed in the essay show statistically significant signs consistent with the theory. An interesting finding for the inflation-targeting period is that the estimates indicate a strong, direct relation between the primary surplus/GDP ratio and the probability of default, evidence that highlights the importance of the indirect effects the surplus can generate on domestic interest rates. The third essay analyzes the discrete dynamics of the SELIC target interest rate set at the meetings of the Brazilian Monetary Policy Committee (COPOM). Two methods are used to study COPOM's decisions to reduce, maintain, or increase the policy rate: binomial probit and multinomial probit. The results show that inflation deviations and the output gap are relevant variables for explaining COPOM's decisions. The binomial probit models for increases and reductions of the SELIC target show that including the fiscal variable yields better results. For the aggregate case, using the multinomial probit method, combining the fiscal variable with inflation expectations yields the best results among the specifications considered. Thus, COPOM's responses to fiscal outcomes and to market inflation expectations are the signals the market should watch.
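As a hedged illustration of the kind of objective the first essay describes (the abstract does not give the functional form, so everything here is an assumption: $\pi$ is inflation, $\pi^{*}$ its target, $y$ the output gap, $s$ the primary surplus/GDP ratio, $s^{*}$ its target, and $\lambda, \mu$ preference weights), a social loss function extended with a fiscal term might read

$$ L = \mathbb{E}_t \sum_{i=0}^{\infty} \beta^{i}\Big[(\pi_{t+i}-\pi^{*})^{2} + \lambda\, y_{t+i}^{2} + \mu\,(s_{t+i}-s^{*})^{2}\Big], $$

so that varying the weight $\mu$ on primary-surplus deviations, which is the comparative exercise the essay performs, changes the interest rate rule that minimizes the loss.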
|
545 |
Constitutionalism and judicial appointment as a means of safeguarding judicial independence in selected African jurisdictions. Makama, Saul Porsche. 11 1900.
The beginning of the 1990s saw many African countries embarking on the process of drafting new constitutions as they abandoned their independence constitutions. Most of the independence constitutions were perceived as constitutions without constitutionalism, and they were generally blamed for the failure of democracy and the rule of law in Africa.
The study analyses the state of democracy and constitutionalism and the impact that colonialism had on the African continent. Despite the spurt of new constitutions adopted, democracy is growing very slowly in most African states, with widespread human rights violations and disregard for the rule of law and the principle of separation of powers still holding centre stage.
Judicial independence is an important component of democracy in the modern state. The study therefore scrutinizes how the principle of judicial independence can be promoted and protected to enhance democracy. One important mechanism that plays a crucial role in safeguarding judicial independence is the way judicial officers are appointed. The study selects four countries (Swaziland, Kenya, Zimbabwe and South Africa) and analyses how judicial officers are appointed in each, in an effort to find an effective and optimal approach.
The premise of the study is centred on the role of constitutionalism and the process of appointing judges as a means of promoting and safeguarding democracy in these selected countries. / Public, Constitutional, and International Law / LLM
|
546 |
Análise de desempenho dos algoritmos Apriori e Fuzzy Apriori na extração de regras de associação aplicados a um Sistema de Detecção de Intrusos / Performance analysis of algorithms Apriori and Fuzzy Apriori in association rules mining applied to a System for Intrusion Detection. Ricardo Ferreira Vieira de Castro. 20 February 2014.
Association rule mining (ARM) over quantitative data has attracted great research interest in the data mining field. As databases grow, substantial research effort goes into algorithms that improve performance with respect to the number of rules produced, their relevance, and computational cost. The APRIORI algorithm, traditionally used to extract association rules, was originally designed for categorical attributes. To use it with continuous, or quantitative, attributes, it is generally necessary to discretize them, creating categories from discrete intervals. The most traditional discretization methods produce intervals with sharp boundaries, which may underestimate or overestimate elements near the partition boundaries and thus lead to an imprecise semantic representation. One way to address this problem is to create soft partitions with smoothed boundaries. This work uses a fuzzy partition of the continuous variables, based on fuzzy set theory, that transforms the quantitative attributes into partitions of linguistic terms. Fuzzy association rule mining (FARM) algorithms work on this principle, and the FUZZYAPRIORI algorithm, which belongs to this category, is used here. The extracted rules are expressed in linguistic terms, which is more natural and more interpretable for human reasoning. The traditional APRIORI and FUZZYAPRIORI algorithms are compared through associative classifiers built from the rules extracted by each. These classifiers were applied to a database of TCP/IP connection records intended for building an Intrusion Detection System.
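As a hedged sketch of the fuzzy-partition idea (not the FUZZYAPRIORI implementation itself), the code below maps a hypothetical quantitative attribute to linguistic terms with triangular membership functions and computes a fuzzy support as the normalized sum of membership degrees; all names, breakpoints, and values are assumptions:

```python
import numpy as np

def triangular(x, a, b, c):
    # Triangular membership function peaking at b with support [a, c].
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Hypothetical fuzzy partition of a TCP connection "duration" attribute
# (seconds) into linguistic terms; breakpoints are invented.
terms = {
    "short":  lambda x: triangular(x, -1.0, 0.0, 30.0),
    "medium": lambda x: triangular(x, 0.0, 30.0, 120.0),
    "long":   lambda x: triangular(x, 30.0, 120.0, 210.0),
}

durations = np.array([5.0, 45.0, 110.0])

# Fuzzy support of the one-item set {duration = term}: the normalized sum of
# membership degrees, replacing the crisp 0/1 count of classical APRIORI.
for name, mu in terms.items():
    print(name, round(float(mu(durations).sum() / len(durations)), 3))
```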
|
548 |
An empirical study for the application of the evidential reasoning rule to decision making in financial investment. Gao, Quanjian. January 2016.
The aim of this thesis is to explore the adaptability of the Evidential Reasoning (ER) rule as a method for providing a useful supporting tool to help investors make decisions on financial investments. Decision making in financial investment often involves conflicting information and the subjective judgment of investors. Accordingly, the ER rule, extended from the original, popular Evidential Reasoning algorithm and developed for multiple criteria decision making (MCDM), is particularly suited to handling conflicting information and to allowing judgmental weighting of the sources of evidence. The fundamental concern is to define and assess "efficient information". For this purpose, an Efficient Information Assessment (EIA) process is defined, modeled with the mass function of Dempster-Shafer theory and constructed so that the underlying architecture of the model satisfies the requirements of the ER rule. The EIA model builds a set of portfolio strategies from the information in top financial analysts' recommendations; the model then enables the ER rule to evaluate all strategies to help investors make decisions. Experiments were carried out to back-test the investment strategy using data from the China Stock Market & Accounting Research (CSMAR) Database for the four-year period between 2009 and 2012. The data contained more than 270,000 reports from more than 4,600 financial analysts. The risk-adjusted average annual return of the strategy outperformed that of the CSI300 index by as much as 10.69% for an investment horizon of six months, with a p-value from Student's t-test as low as 0.02%. The EIA model serves as the first successful application adapting the ER rule to a new and effective decision-making process in financial investment, and to the best of my knowledge this work is the only empirical study applying the ER rule to the opinions of financial analysts.
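As background for the Dempster-Shafer machinery the abstract mentions, here is a minimal sketch of combining two mass functions with Dempster's rule; the ER rule generalizes this combination with evidence weights and reliabilities, and the frame and mass values below are invented for illustration:

```python
from itertools import product

def dempster_combine(m1, m2):
    # Combine two mass functions (dicts keyed by frozenset focal elements)
    # with Dempster's rule, normalizing away the conflicting mass.
    combined, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Two hypothetical analysts' opinions on one stock, frame = {buy, sell};
# assigning mass to the whole frame expresses ignorance.
BUY, SELL = frozenset({"buy"}), frozenset({"sell"})
FRAME = BUY | SELL
m1 = {BUY: 0.6, FRAME: 0.4}
m2 = {BUY: 0.5, SELL: 0.3, FRAME: 0.2}
print(dempster_combine(m1, m2))  # belief concentrates on {'buy'}
```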
|
550 |
Illegitimate Tax Avoidance and Rule XVI of Preliminary Title of Tax Code / La Elusión Fiscal y la Norma XVI del Título Preliminar del Código Tributario. Tarsitano, Alberto. 12 April 2018.
The author analyzes the highly important issue of illegitimate tax avoidance. He begins by clarifying the concept of illegitimate tax avoidance and distinguishing it from related concepts such as tax evasion and legitimate tax planning (economía de opción). He then reviews the debate over using legal constructs foreign to Tax Law to solve the problem of illegitimate tax avoidance. Finally, he explains the scope and application of the Peruvian general anti-avoidance rule stipulated in Rule XVI of the Preliminary Title of the Tax Code.
|