61

Implementation av ett kunskapsbas system för rough set theory med kvantitativa mätningar / Implementation of a Rough Knowledge Base System Supporting Quantitative Measures

Andersson, Robin January 2004 (has links)
This thesis presents the implementation of a knowledge base system for rough sets [Paw92] within the logic programming framework. The combination of rough set theory with logic programming is a novel approach. The presented implementation serves as a prototype system for the ideas presented in [VDM03a, VDM03b]. The system is available at http://www.ida.liu.se/rkbs. The presented language for describing knowledge in the rough knowledge base caters for implicit definition of rough sets by combining different regions (e.g. upper approximation, lower approximation, boundary) of other defined rough sets. The rough knowledge base system also provides methods for querying the knowledge base and for computing quantitative measures. We test the implemented system on a medium-sized application example to illustrate the usefulness of the system and the incorporated language. We also provide performance measurements of the system.
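The region-based definitions above are straightforward to express in code. The following is a minimal sketch of Pawlak-style lower and upper approximations over a toy decision table; it is not the thesis's logic-programming system (whose language is defined in [VDM03a, VDM03b]), and all names and data are illustrative:

```python
from collections import defaultdict

def partition(universe, attrs, value):
    """Group objects into equivalence classes under indiscernibility:
    two objects are indiscernible if they agree on all given attributes."""
    classes = defaultdict(set)
    for obj in universe:
        key = tuple(value(obj, a) for a in attrs)
        classes[key].add(obj)
    return classes.values()

def approximations(universe, attrs, value, target):
    """Return (lower, upper) approximations of the set `target`;
    the boundary region is upper minus lower."""
    lower, upper = set(), set()
    for eq in partition(universe, attrs, value):
        if eq <= target:      # class lies entirely inside the concept
            lower |= eq
        if eq & target:       # class overlaps the concept
            upper |= eq
    return lower, upper

# Toy decision table: patient -> (temperature, cough); flu = positive cases.
table = {1: ("high", "yes"), 2: ("high", "yes"), 3: ("normal", "no"),
         4: ("high", "no"), 5: ("normal", "no")}
flu = {1, 4}
value = lambda o, a: table[o][a]
low, up = approximations(set(table), [0, 1], value, flu)
print(low, up, up - low)      # lower, upper, boundary
```

A quantitative measure such as the accuracy of the rough set can then be derived as |lower| / |upper|, in the spirit of the measures the system computes.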
62

Understanding the relationship of lumber yield and cutting bill requirements: a statistical approach

Buehlmann, Urs 13 October 1998 (has links)
Secondary hardwood products manufacturers have been placing heavy emphasis on lumber yield improvements in recent years. More attention has been paid to lumber grade and cutting technology than to cutting bill design. However, understanding the underlying physical phenomena of cutting bill requirements and yield is essential to improve lumber yield in rough mills. This understanding could also be helpful in constructing a novel lumber yield estimation model. The purpose of this study was to advance the understanding of the phenomena relating cutting bill requirements and yield. The scientific knowledge gained was used to describe and quantify the effect of part length, width, and quantity on yield. Based on this knowledge, a statistics-based approach to the lumber yield estimation problem was undertaken. Rip-first rough mill simulation techniques and statistical methods were used to attain the study's goals. To facilitate the statistical analysis of the relationship between cutting bill requirements and lumber yield, a theoretical concept called cutting bill part groups was developed. Part groups are a standardized way to describe cutting bill requirements. All parts required by a cutting bill are clustered into 20 individual groups according to their size. Each group's midpoint is the representative part size for all parts falling within that group. The groups are constructed such that the error from clustering is minimized. This concept allowed a decrease in the number of possible factors to account for in the analysis of the relationship between cutting bill requirements and lumber yield. Validation of the concept revealed that the average error due to clustering parts is 1.82 percent absolute yield. An orthogonal 2^(20-11) fractional factorial design of resolution V was then used to determine the contribution of different part sizes to lumber yield. All 20 part sizes and 113 of a total of 190 unique secondary interactions were found to be significant (α = 0.05) in explaining the variability in yield observed. Parameter estimates of the part sizes and the secondary interactions were then used to specify the average yield contribution of each variable. Parts 17.50 inches long and 2.50 inches wide were found to contribute the most to higher yield. The positive effect on yield of parts smaller than 17.50 by 2.50 inches is less pronounced because their quantity is relatively small in an average cutting bill. Parts of size 72.50 by 4.25 inches, on the other hand, had the most negative influence on high yield. However, as further analysis showed, not only the individual parts required by a cutting bill but also their interactions determine yield. By adding a sufficiently large number of smaller parts to a cutting bill that requires large parts to be cut, high levels of yield can be achieved. A novel yield estimation model using linear least squares techniques was derived from the data of the fractional factorial design. This model estimates expected yield based on the part quantities required by a standardized cutting bill. The final model contained all 20 part groups and their 190 unique secondary interactions. The adjusted R² for this model was found to be 0.94. The model estimated 450 of the 512 standardized cutting bills used for its derivation to within one percent absolute yield. Standardized cutting bills whose yield levels differ by more than two percent can thus be classified correctly in 88 percent of the cases.
Standardized cutting bills whose part quantities were tested beyond the established framework, i.e., the settings used for the data derivation, were estimated with an average error of 2.19 percent absolute yield. Despite the error observed, the model ranked the cutting bills quite accurately with respect to their yield level. However, cutting bills from actual rough mill operations, which were well beyond the framework of the model, were found to have an average estimation error of 7.62 percent. Nonetheless, the model classified four out of five cutting bills correctly with respect to the ranking of the yield level achieved. The least squares estimation model is thus a helpful tool for ranking cutting bills by their expected yield level. Overall, the model performs well for standardized cutting bills, but more work is needed to make it generally applicable to cutting bills whose requirements lie beyond the framework established in this study. / Ph. D.
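As a rough illustration of the modeling approach described above (a linear least-squares fit of yield on part-group quantities and their pairwise interactions), the following sketch fits such a model to synthetic data. The thesis's actual model uses 20 part groups, 190 interactions, and simulation-derived yields; the group count, quantities, and "true" coefficients below are stand-in assumptions:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)

# Synthetic stand-in: 512 "cutting bills", each a vector of part
# quantities for 5 part groups (the thesis uses 20).
n_bills, n_groups = 512, 5
X = rng.integers(0, 50, size=(n_bills, n_groups)).astype(float)

# Design matrix: intercept + main effects + all pairwise interactions.
pairs = list(combinations(range(n_groups), 2))
design = np.column_stack(
    [np.ones(n_bills)] + [X[:, j] for j in range(n_groups)]
    + [X[:, i] * X[:, j] for i, j in pairs])

# Synthetic "true" yield to fit against (simulation output in the thesis).
beta_true = rng.normal(0.0, 0.1, size=design.shape[1])
y = design @ beta_true + rng.normal(0.0, 0.5, size=n_bills)

beta_hat, *_ = np.linalg.lstsq(design, y, rcond=None)
resid = y - design @ beta_hat
print("mean absolute error:", np.abs(resid).mean())
```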
63

Option pricing with Quadratic Rough Heston Model

Dushkina, Marina January 2023 (has links)
In this thesis, we study the quadratic rough Heston model and the corresponding simulation methods. We implement and compare the three commonly used schemes (hybrid, multifactor, and multifactor hybrid). We calibrate the model using real-world SPX market data, applying quasi-Monte Carlo methods to speed up the calibration. We study the effect of the various calibration parameters on the volatility smile.
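For intuition, here is a crude Monte Carlo sketch of a quadratic rough Heston-style model: the spot variance is V_t = a(Z_t - b)² + c, with Z_t a power-law-weighted moving average of past volatility-scaled Brownian increments. This is a naive O(N²) convolution discretization, not one of the hybrid or multifactor schemes the thesis implements; it also drops the mean-reversion drift in Z, and all parameter values are illustrative rather than calibrated:

```python
import numpy as np
from math import gamma

rng = np.random.default_rng(1)

# Illustrative parameters (assumptions, not values from the thesis).
H, a, b, c, eta = 0.1, 0.2, 0.1, 0.02, 1.0
S0, strike, T = 100.0, 100.0, 0.5
n_steps, n_paths = 200, 2000
dt = T / n_steps

def kernel(u):
    """Rough power-law kernel K(u) = u^(H - 1/2) / Gamma(H + 1/2)."""
    return u ** (H - 0.5) / gamma(H + 0.5)

logS = np.full(n_paths, np.log(S0))
Z = np.zeros(n_paths)
V = a * (Z - b) ** 2 + c                    # quadratic variance map
hist = np.zeros((n_steps, n_paths))         # past eta*sqrt(V) dW increments

for i in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)
    logS += -0.5 * V * dt + np.sqrt(V) * dW  # risk-neutral, zero rates
    hist[i] = eta * np.sqrt(V) * dW
    # Z at t_{i+1}: discrete convolution of the kernel with past increments.
    lags = (i + 1 - np.arange(i + 1)) * dt   # strictly positive lags
    Z = kernel(lags) @ hist[: i + 1]
    V = a * (Z - b) ** 2 + c

payoff = np.maximum(np.exp(logS) - strike, 0.0)
print("European call estimate:", payoff.mean())
```

The hybrid schemes the thesis compares treat the kernel singularity near zero more carefully and scale better than this direct convolution, which is why they are preferred in practice.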
64

Data Classification System Based on Combination Optimized Decision Tree : A Study on Missing Data Handling, Rough Set Reduction, and FAVC Set Integration / Dataklassificeringssystem baserat på kombinationsoptimerat beslutsträd : En studie om saknad datahantering, grov uppsättningsreduktion och FAVC-uppsättningsintegration

Lu, Xuechun January 2023 (has links)
Data classification is a data analysis technique that involves extracting valuable information with potential utility from databases. It has found extensive applications in various domains, including finance, insurance, government, education, transportation, and defense. There are several methods available for data classification, with decision tree algorithms being among the most widely used. These algorithms are based on instance-based inductive learning and offer advantages such as rule extraction, low computational complexity, and the ability to highlight important decision attributes, leading to high classification accuracy. According to statistics, decision tree algorithms [1] are among the most widely utilized data mining algorithms. However, existing decision tree algorithms exhibit limitations such as low computational efficiency and multi-valued [2] bias. To address these challenges, a data classification system based on an optimized decision tree algorithm written in Python and a data storage system based on PostgreSQL were developed. The proposed algorithm surpasses traditional classification algorithms in terms of dimensionality reduction, attribute selection, and scalability. Ultimately, a combined optimization decision tree classifier system is introduced, which exhibits superior performance compared to the widely used ID3 [3] algorithm. The improved decision tree algorithm has both theoretical and practical significance for data mining applications.
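As a point of reference for the ID3 baseline mentioned above, the following sketch computes the entropy and information gain that ID3 uses to pick split attributes. The table and attribute names are toy examples, and the thesis's combination-optimized tree is not reproduced here:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a sequence of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr):
    """Entropy reduction from splitting `rows` on attribute index `attr`."""
    base = entropy(labels)
    splits = {}
    for row, lab in zip(rows, labels):
        splits.setdefault(row[attr], []).append(lab)
    remainder = sum(len(s) / len(labels) * entropy(s) for s in splits.values())
    return base - remainder

# Toy table: (outlook, windy) -> play?
rows = [("sunny", "no"), ("sunny", "yes"), ("rain", "no"),
        ("rain", "yes"), ("overcast", "no")]
labels = ["no", "no", "yes", "no", "yes"]
for attr, name in enumerate(["outlook", "windy"]):
    print(name, round(information_gain(rows, labels, attr), 3))
```

ID3 greedily splits on the attribute with the highest gain and recurses; the multi-valued bias noted in the abstract arises because gain tends to favor attributes with many distinct values.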
65

Large eddy simulation for automotive vortical flows in ground effect

Schembri-Puglisevich, Lara January 2013 (has links)
Large Eddy Simulation (LES) is carried out using the Rolls-Royce Hydra CFD code in order to investigate and give further insight into highly turbulent, unsteady flow structures for automotive applications. LES resolves time-dependent eddies that are modelled in the steady state by Reynolds-Averaged Navier-Stokes (RANS) turbulence models. A standard Smagorinsky subgrid scale model is used to model the energy transfer between large and subgrid scales. Since Hydra is an unstructured algorithm, a variety of unstructured hexahedral, tetrahedral and hybrid grids are used for the different cases investigated. Due to the computational requirements of LES, the cases in this study replicate and analyse generic flow problems through simplified geometry, rather than modelling accurate race car geometry, which would lead to infeasible calculations. The first case investigates the flow around a diffuser-equipped bluff body at an experimental Reynolds number of 1.01 × 10^6 based on model height and inlet velocity. LES is carried out on unstructured hexahedral grids of 10 million and 20 million nodes, with the latter showing improved surface pressure when compared to the experiments. Comparisons of velocity and vorticity between the LES and experiments at the diffuser exit plane show a good level of agreement. Flow visualisation of the vortices in the diffuser region and behind the model, from both the mean and the instantaneous flow, explores the relation, if any, between the two. The main weakness of the simulation was the late laminar-to-turbulent transition in the underbody region. The size of the domain and the high experimental Reynolds number make this case very challenging. After the challenges faced by the diffuser-equipped bluff body, the underbody region is isolated so that increased grid refinement can be achieved in this region, and the calculation is run at a Reynolds number of 220,000, reducing the computational requirement relative to the previous case. A vortex generator mounted onto a flat underbody at an onset angle to the flow is modelled to generate vortices that extend along the length of the underbody, and its interaction with the ground is analysed. Since the vortex generator resembles a slender wing at incidence to the flow, a delta wing study is presented as a preliminary step, since literature on automotive vortex generators in ground effect is scarce. Results from the delta wing study, run at an experimental Reynolds number of 1.56 × 10^6, are in very good agreement with previous experiments and Detached Eddy Simulation (DES) studies, giving improved detail and understanding. Axial velocity and vorticity contours at several chordwise stations show that the leading edge vortices are predicted very well by a 20 million node tetrahedral grid. Sub-structures that originate from the leading edge of the wing and form around the core of the leading edge vortex are also captured. Large Eddy Simulation of the flow around an underbody vortex generator over a smooth ground and a rough ground is presented. A hexahedral grid of 40 million nodes is used for the smooth ground case, whilst a 48 million node hybrid grid was generated for the rough ground case so that the detailed geometry near the ground could be captured by tetrahedral cells. The geometry for the rough surface is modelled by scanning a tarmac surface to capture the cavities and protrusions in the ground.
This is the first time that a rough surface representing a tarmac road has been resolved in a CFD simulation so that its effect on vortex decay can be studied. Flow visualisation of the instantaneous flow has shown strong interaction with the ground, and the results from this study have given an initial understanding in this area.
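For readers unfamiliar with the subgrid model mentioned above, the standard Smagorinsky closure sets the eddy viscosity to ν_t = (C_s Δ)² |S̄|, where |S̄| is the resolved strain-rate magnitude. The following sketch evaluates this on a synthetic 2-D velocity field; the constant, filter width, and field are illustrative, and the thesis's computations use the Rolls-Royce Hydra code rather than anything like this snippet:

```python
import numpy as np

def smagorinsky_nut(u, v, dx, dy, Cs=0.17):
    """Subgrid eddy viscosity nu_t = (Cs * Delta)^2 * |S| on a 2-D grid,
    with |S| = sqrt(2 S_ij S_ij) from the resolved strain-rate tensor."""
    dudx, dudy = np.gradient(u, dx, dy)
    dvdx, dvdy = np.gradient(v, dx, dy)
    Sxx, Syy = dudx, dvdy
    Sxy = 0.5 * (dudy + dvdx)
    S_mag = np.sqrt(2.0 * (Sxx**2 + Syy**2 + 2.0 * Sxy**2))
    delta = np.sqrt(dx * dy)              # grid filter width
    return (Cs * delta) ** 2 * S_mag

# Synthetic resolved velocity field on a 64 x 64 grid.
x = np.linspace(0, 2 * np.pi, 64)
X, Y = np.meshgrid(x, x, indexing="ij")
u, v = np.sin(X) * np.cos(Y), -np.cos(X) * np.sin(Y)   # Taylor-Green-like
nut = smagorinsky_nut(u, v, x[1] - x[0], x[1] - x[0])
print("max nu_t:", nut.max())
```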
66

On the use of the finite element method for the modeling of acoustic scattering from one-dimensional rough fluid-poroelastic interfaces

Bonomo, Anthony Lucas 02 October 2014 (has links)
A poroelastic finite element formulation originally derived for modeling porous absorbing material in air is adapted to the problem of acoustic scattering from a poroelastic seafloor with a one-dimensional randomly rough interface. The developed formulation is verified through calculation of the plane wave reflection coefficient for the case of a flat surface and comparison with the well-known analytical solution. The scattering strengths are then obtained for two different sets of material properties and roughness parameters using a Monte Carlo approach. These numerical results are compared with those given by three analytic scattering models (perturbation theory, the Kirchhoff approximation, and the small-slope approximation) and with those calculated using two finite element formulations in which the sediment is modeled as an acoustic fluid.
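The flat-surface analytic check mentioned above has a compact closed form in the simpler fluid-fluid case; the full fluid-poroelastic coefficient from Biot theory is considerably longer and is not reproduced here. The following sketch evaluates the classical Rayleigh reflection coefficient for a plane wave incident from water onto a notional fluid sediment; the material values are illustrative assumptions:

```python
import numpy as np

def reflection_coefficient(theta_i, rho1, c1, rho2, c2):
    """Plane-wave reflection coefficient at a flat fluid-fluid interface.
    theta_i: incidence angle from the normal (radians), medium 1 on top.
    Complex sqrt handles post-critical angles, where |R| = 1."""
    sin_t2 = (c2 / c1) * np.sin(theta_i)        # Snell's law
    cos_t2 = np.sqrt(1 - sin_t2.astype(complex) ** 2)
    Z1 = rho1 * c1 / np.cos(theta_i)            # acoustic impedances
    Z2 = rho2 * c2 / cos_t2
    return (Z2 - Z1) / (Z2 + Z1)

theta = np.linspace(0, np.radians(85), 200)
R = reflection_coefficient(theta, rho1=1000.0, c1=1500.0,
                           rho2=1900.0, c2=1650.0)
print("normal-incidence |R|:", abs(R[0]))
```

In a verification study like the one described, the finite element result for a flat interface would be compared against the corresponding analytic curve over a sweep of incidence angles.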
67

Rule-Based Approaches for Large Biological Datasets Analysis : A Suite of Tools and Methods

Kruczyk, Marcin January 2013 (has links)
This thesis is about new and improved computational methods to analyze complex biological data produced by advanced biotechnologies. Such data is not only very large but is also characterized by very high numbers of features. Addressing these needs, we developed a set of methods and tools that are suitable for analyzing large sets of data, including next generation sequencing data, and that build transparent models which may be interpreted by researchers who are not necessarily experts in computing. We focused on brain-related diseases. The first aim of the thesis was to employ the meta-server approach to finding peaks in ChIP-seq data. Taking existing peak finders, we created an algorithm that produces consensus results better than any single peak finder. The second aim was to use supervised machine learning to identify features that are significant in the predictive diagnosis of Alzheimer's disease in patients with mild cognitive impairment. This experience led to the development of a better feature selection method for rough sets, a machine learning approach. The third aim was to deepen the understanding of the role that the STAT3 transcription factor plays in gliomas. Interestingly, we found that STAT3, in addition to being an activator, is also a repressor in certain rat and human glioma models. This was achieved by analyzing STAT3 binding sites in combination with epigenetic marks. STAT3 regulation was determined using expression data of untreated cells and cells after JAK2/STAT3 inhibition. The four papers constituting the thesis are preceded by an exposition of the biological, biotechnological and computational background that provides the foundations for them. The overall results of this thesis testify to the mutually beneficial role that Bioinformatics plays in modern Life Sciences and Computer Science.
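As a sketch of the consensus idea behind the meta-server approach (the thesis's actual algorithm and its handling of individual peak finders are not reproduced here), the following merges peak intervals reported by several finders and keeps the regions supported by at least k of them:

```python
def consensus_peaks(peak_sets, min_support=2):
    """Given per-tool lists of (start, end) peak intervals on one chromosome,
    return maximal regions covered by at least `min_support` tools."""
    events = []
    for peaks in peak_sets:
        for start, end in peaks:
            events += [(start, +1), (end, -1)]
    events.sort()                     # sweep line over interval endpoints
    consensus, depth, region_start = [], 0, None
    for pos, delta in events:
        depth += delta
        if depth >= min_support and region_start is None:
            region_start = pos        # enough tools agree: open a region
        elif depth < min_support and region_start is not None:
            consensus.append((region_start, pos))
            region_start = None       # support dropped: close the region
    return consensus

finder_a = [(100, 250), (400, 500)]
finder_b = [(120, 260), (450, 520)]
finder_c = [(90, 200)]
print(consensus_peaks([finder_a, finder_b, finder_c], min_support=2))
# -> [(100, 250), (450, 500)]
```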
68

Rough path properties for local time of symmetric alpha stable processes

Wang, Qingfeng January 2012 (has links)
No description available.
69

Automatic message annotation and semantic interface for context aware mobile computing

Al-Sultany, Ghaidaa Abdalhussein Billal January 2012 (has links)
In this thesis, the concept of mobile messaging awareness is investigated by designing and implementing a framework that annotates short text messages with context ontology for semantic reasoning, inference, and classification purposes. The keywords of a text message are identified and annotated with concepts, entities, and knowledge drawn from an ontology without the need for a learning process, and the proposed framework supports semantic-reasoning-based message awareness for categorization purposes. The first stage of the research develops a framework for facilitating mobile communication with short annotated text messages (SAMS), which annotates a short text message with part-of-speech tags augmented with internal and external metadata. In the SAMS framework the annotation process is carried out automatically at the time of composing a message. The metadata is collected from the device's file system and the message header, and is then combined with the message's tagged keywords to form an XML file. The significance of the annotation process is to assist the framework during search and retrieval in identifying the tagged keywords; Semantic Web technologies are utilised to improve the reasoning mechanism. The framework is then further developed into Contextual Ontology based Short Text Messages reasoning (SOIM). SOIM enhances the search capabilities of SAMS by adopting short text message annotation and semantic reasoning over a domain ontology, where the domain ontology is modeled as a set of ontological knowledge modules that capture features of contextual entities and of particular events or situations. Fundamentally, SOIM relies on hierarchical semantic distance to compute an approximate degree of match between a new set of relevant keywords and their corresponding abstract class in the domain ontology. Adopting contextual ontology improves text comprehension and message categorization. Fuzzy Set and Rough Set theory have been integrated with SOIM to improve the inference capabilities and system efficiency. Since SOIM chooses the pattern matched to the message by degree of similarity, the issue of choosing the best retrieved pattern arises at the decision-making stage. Fuzzy-reasoning classifier rules based on Fuzzy Set theory are therefore applied on top of the SOIM framework to increase the accuracy of the classification process and produce clearer decisions. The issue of uncertainty in the system is addressed with Rough Set theory, whereby irrelevant and indecisive properties that negatively affect the framework's efficiency are ignored during the matching process.
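As an illustration of a hierarchical semantic distance of the kind SOIM relies on, the following sketch computes a Wu-Palmer-style similarity over a toy is-a hierarchy. The ontology, concept names, and exact formula are illustrative assumptions, not the thesis's definitions:

```python
# Toy is-a hierarchy: child -> parent.
parents = {"sms": "message", "email": "message", "message": "communication",
           "call": "communication", "communication": "thing",
           "meeting": "event", "event": "thing"}

def path_to_root(concept):
    path = [concept]
    while path[-1] in parents:
        path.append(parents[path[-1]])
    return path

def depth(concept):
    return len(path_to_root(concept)) - 1   # root has depth 0

def wu_palmer(c1, c2):
    """Similarity = 2 * depth(LCS) / (depth(c1) + depth(c2)),
    where LCS is the deepest ancestor shared by both concepts."""
    ancestors1 = set(path_to_root(c1))
    lcs = next(c for c in path_to_root(c2) if c in ancestors1)
    return 2 * depth(lcs) / (depth(c1) + depth(c2))

print(wu_palmer("sms", "email"))    # siblings under "message": high score
print(wu_palmer("sms", "meeting"))  # only share the root: score 0 here
```

Concepts close together in the hierarchy score near 1, while concepts that only meet at the root score near 0, which is the behaviour an approximate keyword-to-class matcher needs.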
70

The expected signature of a stochastic process

Ni, Hao January 2012 (has links)
The signature of a path provides a top-down description of the path in terms of its effects as a control. It is a group-like element in the tensor algebra and is an essential object in rough path theory. When the path is random, the linear independence of the signatures of different paths leads one to expect, and it has been proved in simple cases, that the expected signature captures the complete law of this random variable. It is therefore of great interest to be able to compute examples of expected signatures. In this thesis, we aim to compute the expected signature of various stochastic processes by a PDE approach. We consider the case of an Ito diffusion process up to a fixed time, and the case of Brownian motion up to the first exit time from a domain. We derive the PDE of the expected signature for both cases and find that this PDE system can be solved recursively. Some specific examples are included as well, e.g. Ornstein-Uhlenbeck (OU) processes, Brownian motion, and Lévy area coupled with Brownian motion.
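For the Brownian case up to a fixed time the closed form is classical: the expected signature of d-dimensional Brownian motion up to time T is exp⊗((T/2) Σᵢ eᵢ⊗eᵢ), so the level-2 terms are T/2 on the diagonal and 0 off it. The following Monte Carlo sketch (not the thesis's PDE method) checks this for d = 2 by computing the level-2 signature of piecewise-linear interpolations of simulated paths:

```python
import numpy as np

rng = np.random.default_rng(2)
d, T, n_steps, n_paths = 2, 1.0, 200, 10000
dt = T / n_steps

# Increments and left endpoints of a d-dimensional Brownian motion.
dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps, d))
B_prev = np.cumsum(dB, axis=1) - dB     # path value before each step

# Level-2 signature of the piecewise-linear interpolation (exact, by
# Chen's relation): S2 = sum_t ( B_{t-} (x) dB_t + (1/2) dB_t (x) dB_t ).
S2 = (np.einsum("pti,ptj->pij", B_prev, dB)
      + 0.5 * np.einsum("pti,ptj->pij", dB, dB))

# Theory: E[S2] = (T/2) * Identity.
print(S2.mean(axis=0))                  # approx [[0.5, 0], [0, 0.5]]
```

The exit-time case treated in the thesis has no such simple closed form, which is what makes the recursive PDE characterization valuable.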
