121 |
Expressiveness and Decidability of Weighted Automata and Weighted Logics. Paul, Erik. 19 October 2020.
Automata theory, one of the main branches of theoretical computer science, established its roots in the middle of the 20th century. One of its most fundamental concepts is that of a finite automaton, a basic yet powerful model of computation. In essence, finite automata provide a method to finitely represent possibly infinite sets of strings. Such a set of strings is also called a language, and the languages which can be described by finite automata are known as regular languages. Owing to their versatility, regular languages have received a great deal of attention over the years. Other formalisms were shown to be expressively equivalent to finite automata, most notably regular grammars, regular expressions, and monadic second order (MSO) logic. To increase expressiveness, the fundamental idea underlying finite automata and regular languages was also extended to describe not only languages of strings, or words, but also of infinite words by Büchi and Muller, finite trees by Doner and Thatcher and Wright, infinite trees by Rabin, nested words by Alur and Madhusudan, and pictures by Blum and Hewitt, just to name a few examples. In a parallel line of development, Schützenberger introduced weighted automata which allow the description of quantitative properties of regular languages. In subsequent works, many of these descriptive formalisms and extensions were combined and their relationships investigated. For example, weighted regular expressions and weighted logics have been developed as well as regular expressions for trees and pictures, regular grammars for trees, pictures, and nested words, and logical characterizations for regular languages of trees, pictures, and nested words.
In this work, we focus on two of these extensions and their relationship, namely weighted automata and weighted logics. Just as the classical Büchi-Elgot-Trakhtenbrot Theorem established the coincidence of regular languages with languages definable in monadic second order logic, weighted automata have been shown to be expressively equivalent to a specific fragment of a weighted monadic second order logic by Droste and Gastin. We explore several aspects of weighted automata and of this weighted logic. More precisely, the thesis considers the following topics.
In the first part, we extend the classical Feferman-Vaught Theorem to the weighted setting. The Feferman-Vaught Theorem is one of the fundamental theorems in model theory. It describes how the computation of the truth value of a first order sentence in a generalized product of relational structures can be reduced to the computation of truth values of first order sentences in the contributing structures and the evaluation of an MSO sentence in the index structure. The theorem has a long history: it builds upon work of Mostowski and was shown in subsequent works to hold true for MSO logic as well. Here, we show that under appropriate assumptions, the Feferman-Vaught Theorem also holds true for a weighted MSO logic with arbitrary commutative semirings as weight structures.
In the second part, we lift four decidability results from max-plus word automata to max-plus tree automata. Max-plus word and tree automata are weighted automata over the max-plus semiring and assign real numbers to words or trees, respectively. We show that, like for max-plus word automata, the equivalence, unambiguity, and sequentiality problems are decidable for finitely ambiguous max-plus tree automata, and that the finite sequentiality problem is decidable for unambiguous max-plus tree automata.
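To make the model concrete (an illustration added here, not taken from the thesis): a max-plus word automaton assigns to a word the maximum, over all runs, of the sum of the transition weights along the run, and this value can be computed with vector-matrix products over the max-plus semiring. A minimal sketch follows, using a hypothetical two-state automaton that computes max(#a, #b); having exactly two runs per word, it is finitely ambiguous, the class discussed above.

```python
import numpy as np

NEG_INF = float("-inf")  # the "zero" of the max-plus semiring

def run_maxplus(word, initial, transitions, final):
    """Weight a max-plus automaton assigns to `word`: the maximum over
    all runs of initial weight + transition weights + final weight."""
    v = initial.copy()
    for symbol in word:
        T = transitions[symbol]
        # v'[j] = max_i (v[i] + T[i, j]): one max-plus vector-matrix product
        v = np.max(v[:, None] + T, axis=0)
    return np.max(v + final)

# Hypothetical automaton: state 0 counts a's, state 1 counts b's,
# so the value of a word is max(#a, #b) over the two runs.
initial = np.array([0.0, 0.0])
final = np.array([0.0, 0.0])
transitions = {
    "a": np.array([[1.0, NEG_INF], [NEG_INF, 0.0]]),
    "b": np.array([[0.0, NEG_INF], [NEG_INF, 1.0]]),
}
print(run_maxplus("aabaaa", initial, transitions, final))  # 5.0
```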
In the last part, we develop a logic which is expressively equivalent to quantitative monitor automata. Introduced very recently by Chatterjee, Henzinger, and Otop, quantitative monitor automata are an automaton model operating on infinite words. Quantitative monitor automata possess several interesting features. They are expressively equivalent to a subclass of nested weighted automata, an automaton model which for many valuation functions has decidable emptiness and universality problems. Also, quantitative monitor automata are more expressive than weighted Büchi automata and their extension with valuation functions. We introduce a new logic which we call monitor logic and show that it is expressively equivalent to quantitative monitor automata.
|
122 |
Implementation of Anomaly Detection on a Time-series Temperature Data Set. Novacic, Jelena; Tokhi, Kablai. January 2019.
Today's society has become more aware of its surroundings and the focus has shifted towards green technology. The need for better environmental impact in all areas is rapidly growing, and energy consumption is one of them. A simple solution for automatically controlling the energy consumption of smart homes is through software. With today's IoT technology and machine learning models, the movement towards software-based eco-living is growing. In order to control the energy consumption of a household, sudden abnormal behavior must be detected and adjusted to avoid unnecessary consumption. This thesis uses a time-series data set of temperature data for implementation of anomaly detection. Four models were implemented and tested: a Linear Regression model, Pandas' EWM function, an exponentially weighted moving average (EWMA) model, and a probabilistic exponentially weighted moving average (PEWMA) model. Each model was tested using data sets from nine different apartments covering the same time period. Each model was then evaluated in terms of Precision, Recall and F-measure, with an additional R^2-score evaluation for Linear Regression. The results of this thesis show that, in terms of accuracy, PEWMA outperformed the other models. The EWMA model was slightly better than the Linear Regression model, followed by the Pandas EWM model.
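As a hedged illustration of the detection scheme described above (a sketch, not the authors' implementation): an EWMA detector flags a reading that strays too far from the exponentially weighted mean of the past readings. The smoothing factor and the three-standard-deviation threshold below are assumptions chosen for the example.

```python
import pandas as pd

def ewma_anomalies(temps: pd.Series, alpha: float = 0.1, k: float = 3.0) -> pd.Series:
    """Flag readings that deviate more than k exponentially weighted
    standard deviations from the exponentially weighted mean."""
    ewm = temps.ewm(alpha=alpha)
    mean = ewm.mean().shift(1)  # shift so each point is judged on past data only
    std = ewm.std().shift(1)
    return (temps - mean).abs() > k * std

readings = pd.Series([21.0, 21.2, 21.1, 21.3, 25.9, 21.2, 21.0])
print(ewma_anomalies(readings))  # only the 25.9 spike is flagged
```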
|
123 |
Primary central nervous system lymphoma and glioblastoma: differentiation using dynamic susceptibility-contrast perfusion-weighted imaging, diffusion-weighted imaging, and 18F-fluorodeoxyglucose positron emission tomography. Nakajima, Satoshi. 25 January 2016.
Kyoto University / 0048 / New-system course doctorate / Doctor of Medical Science / Degree Kō No. 19403 / Medical Doctorate No. 4054 / 新制||医||1012 (University Library) / 32428 / Kyoto University Graduate School of Medicine, Medical Science / (Chief examiner) Professor 前川 平; Professor 平岡 眞寛; Professor 羽賀 博典 / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Medical Science / Kyoto University / DFAM
|
124 |
On the discovery of relevant structures in dynamic and heterogeneous data. Preti, Giulia. 22 October 2019.
We are witnessing an explosion of available data coming from a huge number of sources and domains, which is leading to the creation of ever larger and richer datasets.
Understanding, processing, and extracting useful information from those datasets requires specialized algorithms that take into consideration both the dynamism and the heterogeneity of the data they contain.
Although several pattern mining techniques have been proposed in the literature, most of them fall short in providing interesting structures when the data can be interpreted differently from user to user, when it can change from time to time, and when it has different representations.
In this thesis, we propose novel approaches that go beyond the traditional pattern mining algorithms, and can effectively and efficiently discover relevant structures in dynamic and heterogeneous settings.
In particular, we address the task of pattern mining in multi-weighted graphs, pattern mining in dynamic graphs, and pattern mining in heterogeneous temporal databases.
In pattern mining in multi-weighted graphs, we consider the problem of mining patterns for a new category of graphs called multi-weighted graphs. In these graphs, nodes and edges can carry multiple weights that represent, for example, the preferences of different users or applications, and that are used to assess the relevance of the patterns.
We introduce a novel family of scoring functions that assign a score to each pattern based on both the weights of its appearances and their number, and that respect the anti-monotone property, pivotal for efficient implementations.
We then propose a centralized and a distributed algorithm that solve the problem both exactly and approximately. The approximate solution has better scalability in terms of the number of edge weighting functions, while achieving good accuracy in the results found.
An extensive experimental study shows the advantages and disadvantages of our strategies, and proves their effectiveness.
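To give one concrete instance of such a scoring function (an illustrative choice made here, not the thesis's exact definition): scoring a pattern by the best minimum edge weight among its appearances is anti-monotone, since every appearance of an extended pattern contains an appearance of the original with an equal or higher minimum. The sketch below ignores the appearance count, which the thesis's family also takes into account.

```python
def pattern_score(appearance_weights):
    """Score = max over appearances of the minimum edge weight in the
    appearance. Extending a pattern can never raise this score, which
    is the anti-monotone property that enables pruning in the search."""
    return max(min(weights) for weights in appearance_weights)

# Two appearances of a hypothetical triangle pattern, one weight per edge
# (say, the preferences of one user):
print(pattern_score([[0.9, 0.7, 0.8], [0.6, 0.95, 0.5]]))  # max(0.7, 0.5) = 0.7
```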
Then, in pattern mining in dynamic graphs, we focus on the particular task of discovering structures that are both well-connected and correlated over time, in graphs where nodes and edges can change over time.
These structures represent edges that are topologically close and exhibit a similar behavior of appearance and disappearance in the snapshots of the graph.
To this aim, we introduce two measures for computing the density of a subgraph whose edges change in time, and a measure to compute their correlation.
The density measures are able to detect subgraphs that are silent in some periods of time but highly connected in others, and thus they can detect events or anomalies that happened in the network.
The correlation measure can identify groups of edges that tend to co-appear together, as well as edges that are characterized by similar levels of activity.
For both variants of density measure, we provide an effective solution that enumerates all the maximal subgraphs whose density and correlation exceed given minimum thresholds, but can also return a more compact subset of representative subgraphs that exhibit high levels of pairwise dissimilarity.
Furthermore, we propose an approximate algorithm that scales well with the size of the network, while achieving a high accuracy.
We evaluate our framework with an extensive set of experiments on both real and synthetic datasets, and compare its performance with the main competitor algorithm.
The results confirm the correctness of the exact solution, the high accuracy of the approximate, and the superiority of our framework over the existing solutions.
In addition, they demonstrate the scalability of the framework and its applicability to networks of different nature.
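One plausible instantiation of the correlation measure (a sketch under assumptions; the thesis's definition may differ): represent each edge by its 0/1 appearance vector across the snapshots and correlate the vectors, so that edges which co-appear score high.

```python
import numpy as np

def edge_correlation(act_u: np.ndarray, act_v: np.ndarray) -> float:
    """Pearson correlation of two edges' 0/1 appearance vectors across
    the snapshots of a dynamic graph: near 1 for edges that tend to
    co-appear, near -1 for edges that alternate."""
    if act_u.std() == 0 or act_v.std() == 0:
        return 0.0  # an always-present or always-absent edge carries no signal
    return float(np.corrcoef(act_u, act_v)[0, 1])

# Two edges observed over six snapshots (1 = edge present in the snapshot).
e1 = np.array([1, 1, 0, 0, 1, 1])
e2 = np.array([1, 1, 0, 1, 1, 1])
print(edge_correlation(e1, e2))  # ≈ 0.63: the edges mostly co-appear
```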
Finally, we address the problem of entity resolution in heterogeneous temporal databases, which are datasets that contain records that give different descriptions of the status of real-world entities at different periods of time, and thus are characterized by different sets of attributes that can change over time.
Detecting records that refer to the same entity in such a scenario requires a record similarity measure that takes into account the temporal information and that is aware of the absence of a common fixed schema between the records.
However, existing record matching approaches either ignore the dynamism in the attribute values of the records, or assume that all the records share the same set of attributes throughout time.
In this thesis, we propose a novel time-aware schema-agnostic similarity measure for temporal records to find pairs of matching records, and integrate it into an exact and an approximate algorithm.
The exact algorithm can find all the maximal groups of pairwise similar records in the database.
The approximate algorithm, on the other hand, can achieve higher scalability with the size of the dataset and the number of attributes, by relying on a technique called meta-blocking. This algorithm can find a good-quality approximation of the actual groups of similar records, by adopting an effective and efficient clustering algorithm.
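To illustrate what a time-aware, schema-agnostic similarity can look like (an assumed toy measure, not the one proposed in the thesis): compare the records' value sets regardless of attribute names, and damp the score by the time gap between the records.

```python
import math

def record_similarity(rec_a, rec_b, decay: float = 0.1) -> float:
    """Schema-agnostic: Jaccard overlap of the attribute values, ignoring
    which attribute they sit under. Time-aware: exponential damping by
    the gap between the records' timestamps (both choices illustrative)."""
    values_a, time_a = rec_a
    values_b, time_b = rec_b
    jaccard = len(values_a & values_b) / len(values_a | values_b)
    return jaccard * math.exp(-decay * abs(time_a - time_b))

# Two snapshots of what may be the same person, three years apart.
r1 = ({"alice", "london", "engineer"}, 2010)
r2 = ({"alice", "london", "senior engineer"}, 2013)
print(record_similarity(r1, r2))  # 0.5 overlap, damped to ≈ 0.37
```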
|
125 |
Quantitative Investment Strategies on the Swedish Stock Market. Knutsson, Jonatan; Telešova, Gabija. January 2023.
This thesis explores the implementation of three quantitative investment strategies – the dividend yield strategy, the EV/EBITDA strategy, and the momentum strategy – within the Swedish stock market using Equal-Weighted Portfolios (EWP) and Value-Weighted Portfolios (VWP). The analysis is based on backtesting over the periods 2009–2022, 2001–2022, and 1992–2022 for the three strategies respectively. The research aims to assess the risk-adjusted returns of these strategies and to compare the performance of the EWP and the VWP. The results indicate that all the tested quantitative investment strategies beat the market. Moreover, the VWP achieve higher annual returns than the EWP. However, when considering risk-adjusted returns, the EWP generally demonstrate superior performance. Specifically, the momentum EWP with monthly rebalancing exhibits the largest risk-adjusted returns.
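For reference (standard definitions, not details specific to the thesis), the two weighting schemes differ only in how a period's stock returns are averaged, as in this sketch with made-up numbers:

```python
import numpy as np

def portfolio_returns(returns: np.ndarray, market_caps: np.ndarray):
    """One period's return of an equal-weighted portfolio (simple mean)
    and a value-weighted portfolio (market-cap-weighted mean) over the
    same stock selection."""
    ewp = returns.mean()
    vwp = np.average(returns, weights=market_caps)
    return ewp, vwp

rets = np.array([0.04, -0.01, 0.02])   # hypothetical stock returns for one month
caps = np.array([50e9, 5e9, 20e9])     # hypothetical market capitalisations
ewp, vwp = portfolio_returns(rets, caps)
print(f"EWP: {ewp:.4f}, VWP: {vwp:.4f}")  # EWP: 0.0167, VWP: 0.0313
```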
|
126 |
Comparing different exchange traded funds in South Africa based on volatility and returns. Peyper, Wiehan Henri. January 2014.
Increasing sophistication of exchange traded fund (ETF) indexation methods required that a comparison be drawn between the various methodologies. A performance and risk evaluation of four pre-selected ETF indexation categories was conducted to establish the diversification benefits that each contains. Fundamentally weighted, equally weighted and leveraged ETFs were compared to traditional market capitalisation weighted ETFs on the basis of risk and return. While a literature review presented the theory on ETFs and the various statistical measures used for this study, the main findings were obtained empirically from a sample of South African and American ETFs. Several risk-adjusted performance measures were employed to assess the risk and return of each indexation category. Special emphasis was placed on the Omega ratio due to its unique interpretation of the return series' distribution characteristics. The risk of each ETF category was evaluated using the exponentially weighted moving average (EWMA), while the diversification potential was determined by means of a regression analysis based on the single index model. According to the findings, fundamentally weighted ETFs perform the best during an upward-moving market when compared by standard risk-adjusted performance measures. However, the Omega ratio analysis revealed the inherent unsystematic risk of alternatively indexed ETFs and ranked market capitalisation weighted ETFs as the best performing category. Equally weighted ETFs delivered consistently poor rankings, while leveraged ETFs exhibited a high level of risk associated with the amplified returns of this category. The diversification measurement concurred with the Omega ratio analysis and highlighted the market capitalisation weighted ETFs as the most diversified ETFs in the selection. Alternatively indexed ETFs consequently deliver higher absolute returns by incurring greater unsystematic risk, while simultaneously reducing the level of diversification in the fund. / MCom (Risk Management), North-West University, Vaal Triangle Campus, 2014
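Since the Omega ratio carries much of the argument above, here is a small sketch of its standard discrete form (the general definition, not code from the dissertation): the ratio of expected gains above a threshold to expected losses below it, so the whole shape of the return distribution enters, not just mean and variance.

```python
import numpy as np

def omega_ratio(returns: np.ndarray, threshold: float = 0.0) -> float:
    """Omega ratio at `threshold`: average gain above the threshold
    divided by average loss below it; values above 1 mean gains
    outweigh losses at that threshold."""
    gains = np.maximum(returns - threshold, 0.0).mean()
    losses = np.maximum(threshold - returns, 0.0).mean()
    return gains / losses

monthly = np.array([0.02, -0.01, 0.03, -0.02, 0.01, 0.04])
print(omega_ratio(monthly))  # ≈ 3.33
```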
|
128 |
Volumetric T-spline Construction for Isogeometric Analysis – Feature Preservation, Weighted Basis and Arbitrary Degree. Liu, Lei. 01 September 2015.
Constructing spline models for isogeometric analysis is important in integrating design and analysis. Converting designed CAD (Computer Aided Design) models with B-reps to analysis-suitable volumetric T-splines is fundamental for the integration. In this thesis, we work in two directions to achieve this: (a) using Boolean operations and skeletons to build polycubes for feature-preserving high-genus volumetric T-spline construction; and (b) developing weighted T-splines with arbitrary degree for T-spline surface and volume modeling which can be used for analysis. We first develop novel algorithms to build feature-preserving polycubes for volumetric T-spline construction. Then a new type of T-spline, named the weighted T-spline with arbitrary degree, is defined. It is further used in converting CAD models to analysis-suitable volumetric T-splines.

An algorithm is first developed to use Boolean operations in CSG (Constructive Solid Geometry) to generate polycubes robustly; the polycubes are then used to generate volumetric rational solid T-splines. By solving a harmonic field with proper boundary conditions, the input surface is automatically decomposed into regions that are classified topologically as either a cube or a torus. Two Boolean operations, union and difference, are performed with the primitives, and polycubes are generated by parametric mapping. With polycubes, octree subdivision is carried out to obtain a volumetric T-mesh. The obtained T-spline surface is C2-continuous everywhere except the local region surrounding irregular nodes, where the surface continuity is elevated from C0 to G1. Bézier elements are extracted from the constructed solid T-spline models, which are further used in isogeometric analysis. The Boolean operations preserve the topology of the models inherited from design and can generate volumetric T-spline models with better quality.

Furthermore, another algorithm is developed which uses the skeleton as a guide for the polycube construction. From the skeleton of the input model, initial cubes in the interior are first constructed. By projecting corners of interior cubes onto the surface and generating a new layer of boundary cubes, the entire interior domain is split into different cubic regions. With the splitting result, octree subdivision is performed to obtain the T-spline control mesh, or T-mesh. Surface features are classified into three groups: open curves, closed curves and singularity features. For features that do not introduce new singularities, like open or closed curves, we preserve them by aligning to the parametric lines during subdivision, performing volumetric parameterization from a frame field, or modifying the skeleton. For features introducing new singularities, we design templates to handle them. With a valid T-mesh, we calculate rational trivariate T-splines and extract Bézier elements for isogeometric analysis.

Weighted T-spline basis functions are designed to satisfy partition of unity and linear independence, and the weighted T-spline is proved to be analysis-suitable. Compared to standard T-splines, weighted T-splines have fewer geometric constraints and can decrease the number of control points significantly. Trimmed NURBS surfaces of CAD models are reparameterized with weighted T-splines by a new edge interval extension algorithm, with bounded surface error introduced. With knot interval duplication, weighted T-splines are used to deal with extraordinary nodes. With Bézier coefficient optimization, the surface continuity is elevated from C0 to G1 for the one-ring neighborhood elements. Parametric mapping and sweeping methods are developed to construct volumetric weighted T-splines for isogeometric analysis.

Finally, we develop an algorithm to construct arbitrary degree T-splines. The difference between odd degree and even degree T-splines is studied in detail. The methods to extract knot intervals, calculate new weights to handle extraordinary nodes, and extract Bézier elements for analysis are investigated for arbitrary degrees. A hybrid degree weighted T-spline is generated at a designated region with basis functions of different degrees, for the purpose of performing local p-refinement. We also study the convergence rate for T-spline models of different degrees, showing that hybrid degree weighted T-splines have better performance after p-refinement.

In summary, we develop novel methods to construct volumetric T-splines based on polycube and sweeping methods. Arbitrary degree weighted T-splines are proposed, with proved analysis-suitable properties. Weighted T-spline basis functions are used to reparameterize trimmed NURBS surfaces and to handle extraordinary nodes, based on which surface and volumetric weighted T-spline models are constructed for isogeometric analysis.
|
129 |
Measuring reputational risk in the South African banking sector. Ferreira, Susara. January 2015.
With little previous data and literature based on the South African banking sector, the key aim of this study was to contribute further results concerning the effect of operational loss events on the reputation of South African banks. The main distinction between this study and previous empirical research is that a small sample of South African banks listed on the JSE between 2000 and 2014 was used. Insurance companies fell outside the scope of the study. The study primarily focused on identifying reputational risk among Regal Treasury Bank, Saambou Bank, African Bank and Standard Bank. The events announced by these banks occurred between 2000 and 2014, and the precise date of each operational loss announcement was determined. Stock price data were collected for those banks that had unanticipated operational loss announcements (i.e. the event). Microsoft Excel models were applied to estimate the reputational loss as the difference between the announced operational loss and the loss in the stock returns of the selected banks. The results indicated significant negative abnormal returns on the announcement day for three of the four banks. For one of the banks it was assumed that the operational loss was not significant enough to cause reputational risk.
The event methodology, similar to previous literature, furthermore examined the behaviour of return volatility after specific operational loss events using the sample of banks. The study further aimed to make two contributions: firstly, to analyse return volatility after operational loss announcements had been made among South African banks, and secondly, to compare the sample of affected banks with unaffected banks to further identify whether these events spilled over into the banking industry and the market. The volatility of these four banks was compared to that of three unaffected South African banks. The results showed that the operational loss events at Regal Treasury Bank and Saambou Bank had no influence on the unaffected banks. However, the operational loss events at African Bank and Standard Bank influenced the sample of unaffected banks and the Bank Index, indicating systemic risk.
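A sketch of the standard market-model event-study computation behind such abnormal-return findings (the dissertation's Excel models may differ in detail; all numbers below are synthetic):

```python
import numpy as np

def abnormal_returns(stock: np.ndarray, market: np.ndarray, est_window: int) -> np.ndarray:
    """Fit r_stock = alpha + beta * r_market on the estimation window,
    then return actual minus predicted returns for the remaining days."""
    beta, alpha = np.polyfit(market[:est_window], stock[:est_window], 1)
    expected = alpha + beta * market[est_window:]
    return stock[est_window:] - expected

rng = np.random.default_rng(0)
mkt = rng.normal(0.0005, 0.01, 120)                    # synthetic market returns
stk = 0.0002 + 0.9 * mkt + rng.normal(0, 0.005, 120)   # synthetic bank returns
stk[110] -= 0.06  # hypothetical operational-loss announcement day
ar = abnormal_returns(stk, mkt, est_window=100)
print(ar[10])  # large negative abnormal return on the event day
```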
|