51

A Financial Optimization Approach to Quantitative Analysis of Long Term Government Debt Management in Sweden

Grill, Tomas, Östberg, Håkan January 2003 (has links)
The Swedish National Debt Office (SNDO) is the Swedish Government’s financial administration. Among its several tasks, the main one is to manage the central government’s debt in a way that minimizes cost with due regard to risk. The debt management problem is to choose currency composition and maturity profile - a problem made difficult by the many stochastic factors involved.

The SNDO has created a simulation model to quantitatively analyze different aspects of this problem by evaluating a set of static strategies in a great number of simulated futures. This approach has a number of drawbacks, which might be handled by a financial optimization approach based on Stochastic Programming.

The objective of this master’s thesis is thus to apply financial optimization to the Swedish government’s strategic debt management problem, using the SNDO’s simulation model to generate scenarios, and to evaluate this approach against a set of static strategies under fictitious future macroeconomic developments.

In this report we describe how the SNDO’s simulation model is used along with a clustering algorithm to form future scenarios, which are then used by an optimization model to find an optimal decision for the debt management problem.

Results of the evaluations show that our optimization approach is expected to have a lower average annual real cost, but somewhat higher risk, than a set of static comparison strategies in a simulated future. These evaluation results are based on a risk preference set by ourselves, since the government has not expressed its risk preference quantitatively. We also conclude that financial optimization is applicable to the government debt management problem, although some work remains before the method can be incorporated into the strategic work of the SNDO.
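The cost-versus-risk tradeoff described in this abstract can be sketched in a few lines: evaluate each static strategy over a set of simulated scenarios and score it by mean cost plus a risk penalty. Everything below (the strategy cost functions, the scenario distributions, the risk-aversion weight) is invented for illustration and is not the SNDO's actual model.

```python
import random
import statistics

def evaluate(strategy_cost_fn, scenarios, risk_aversion):
    """Score a strategy across scenarios: mean cost plus a risk penalty."""
    costs = [strategy_cost_fn(s) for s in scenarios]
    return statistics.mean(costs) + risk_aversion * statistics.stdev(costs)

random.seed(0)
# Hypothetical scenarios: (interest_rate, fx_shock) draws standing in for
# the simulation model's macroeconomic paths.
scenarios = [(random.gauss(0.04, 0.01), random.gauss(0.0, 0.05))
             for _ in range(500)]

# Two hypothetical static strategies: cost depends on rate and FX exposure.
short_fx = lambda s: s[0] * 0.8 + abs(s[1]) * 0.5   # short maturity, FX-heavy
long_dom = lambda s: s[0] * 1.1 + abs(s[1]) * 0.1   # long maturity, domestic

risk_aversion = 2.0  # stands in for the government's unstated risk preference
best = min([("short_fx", short_fx), ("long_dom", long_dom)],
           key=lambda kv: evaluate(kv[1], scenarios, risk_aversion))
print(best[0])
```

As in the thesis, the risk-aversion weight has to be chosen by the analyst; changing it can flip which strategy wins.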
53

A Systems Biology Approach to Develop Models of Signal Transduction Pathways

Huang, Zuyi August 2010 (has links)
Mathematical models of signal transduction pathways are characterized by a large number of proteins and uncertain parameters, yet only a limited amount of quantitative data is available. The dissertation addresses this problem using two different approaches: the first approach deals with a model simplification procedure for signaling pathways that reduces the model size but retains the physical interpretation of the remaining states, while the second approach deals with creating rich data sets by computing transcription factor profiles from fluorescent images of green-fluorescent-protein (GFP) reporter cells. For the first approach a model simplification procedure for signaling pathway models is presented. The technique makes use of sensitivity and observability analysis to select the retained proteins for the simplified model. The presented technique is applied to an IL-6 signaling pathway model. It is found that the model size can be significantly reduced and the simplified model is able to adequately predict the dynamics of key proteins of the signaling pathway. An approach for quantitatively determining transcription factor profiles from GFP reporter data is developed as the second major contribution of this work. The procedure analyzes fluorescent images to determine fluorescence intensity profiles using principal component analysis and K-means clustering, and then computes the transcription factor concentration from the fluorescence intensity profiles by solving an inverse problem involving a model describing transcription, translation, and activation of green fluorescent proteins. Activation profiles of the transcription factors NF-κB, nuclear STAT3, and C/EBPβ are obtained using the presented approach. The data for NF-κB is used to develop a model for TNF-α signal transduction while the data for nuclear STAT3 and C/EBPβ is used to verify the simplified IL-6 model. 
Finally, an approach is developed to compute the distribution of transcription factor profiles among a population of cells. This approach consists of an algorithm for identifying individual fluorescent cells from fluorescent images, and an algorithm to compute the distribution of transcription factor profiles from the fluorescence intensity distribution by solving an inverse problem. The technique is applied to experimental data to derive the distribution of NF-κB concentrations from fluorescent images of a NF-κB GFP reporter system.
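The inverse-problem idea above (recovering a transcription factor profile from fluorescence by inverting a forward model of GFP production) can be illustrated on a toy one-state model. The thesis uses a fuller transcription-translation-activation model, so this is only a hedged sketch with invented rate constants: GFP obeys dG/dt = a·u(t) - d·G, and the unknown input u is recovered by inverting the same discretization.

```python
# Forward model: dG/dt = a*u(t) - d*G, Euler discretized.
# a (production gain), d (decay rate) and the pulse input are all invented.
a, d, dt = 1.0, 0.3, 0.1
u_true = [1.0 if 10 <= k < 30 else 0.0 for k in range(60)]  # TF activity pulse

G = [0.0]
for k in range(len(u_true) - 1):
    G.append(G[-1] + dt * (a * u_true[k] - d * G[-1]))

# Inverse problem: recover u from the observed G by inverting the scheme.
u_rec = [((G[k + 1] - G[k]) / dt + d * G[k]) / a for k in range(len(G) - 1)]

max_err = max(abs(ur - ut) for ur, ut in zip(u_rec, u_true))
print(max_err)  # exact inversion up to float rounding
```

With noisy measurements this direct inversion amplifies noise, which is why the thesis solves a regularized inverse problem rather than differentiating the data directly.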
54

Decision Making System Algorithm On Menopause Data Set

Bacak, Hikmet Ozge 01 September 2007 (has links) (PDF)
A multiple-centered clustering method, and a decision making system algorithm on a menopause data set that depends on it, are described in this study. The method consists of two stages. At the first stage, the fuzzy C-means (FCM) clustering algorithm is applied to the data set under consideration with a high number of cluster centers. As the output of FCM, cluster centers and membership function values for each data member are calculated. At the second stage, the original cluster centers obtained in the first stage are merged until the desired number of clusters is reached. The merging process relies upon a “similarity measure” between clusters defined in the thesis. During the merging process the cluster center coordinates do not change, but the data members in these clusters are merged into a new cluster. As the output of this method, therefore, one obtains clusters which include many cluster centers. In the final part of this study, as an application of the clustering algorithms (including the multiple-centered clustering method), a decision making system is constructed using a special data set on menopause treatment. The decisions are based on the clusterings created by the algorithms discussed in the previous chapters of the thesis. A verification of the decision aid system is done by a team of experts from the Department of Obstetrics and Gynecology of Hacettepe University under the guidance of Prof. Sinan Beksaç.
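The two-stage idea (run FCM with deliberately many centers, then merge centers by a similarity measure while keeping their coordinates) can be sketched in pure Python on 1-D data. The distance-threshold "similarity measure" below is a stand-in for the one defined in the thesis; the data and all parameters are invented.

```python
import random

def fcm(data, centers, m=2.0, iters=50):
    """Minimal 1-D fuzzy C-means: alternate membership and center updates."""
    for _ in range(iters):
        U = []
        for x in data:
            dists = [abs(x - c) + 1e-12 for c in centers]
            U.append([1.0 / sum((di / dj) ** (2 / (m - 1)) for dj in dists)
                      for di in dists])
        centers = [sum(U[i][j] ** m * data[i] for i in range(len(data))) /
                   sum(U[i][j] ** m for i in range(len(data)))
                   for j in range(len(centers))]
    return centers, U

def merge_centers(centers, threshold):
    """Stage two: group centers closer than a similarity threshold.
    Center coordinates stay fixed; only cluster membership is pooled."""
    groups = []
    for j, c in enumerate(centers):
        for g in groups:
            if any(abs(c - centers[k]) < threshold for k in g):
                g.append(j)
                break
        else:
            groups.append([j])
    return groups

random.seed(1)
# Invented 1-D data with two modes, clustered with four initial centers.
data = ([random.gauss(0.0, 0.3) for _ in range(30)] +
        [random.gauss(5.0, 0.3) for _ in range(30)])
centers, U = fcm(data, centers=[-1.0, 0.5, 4.0, 6.0])
groups = merge_centers(centers, threshold=1.5)
print(len(groups))  # the four centers pool into two merged clusters
```

Each merged cluster thus "includes many cluster centers", exactly as the abstract describes.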
55

Assessment of Machine Learning Applied to X-Ray Fluorescence Core Scan Data from the Zinkgruvan Zn-Pb-Ag Deposit, Bergslagen, Sweden

Simán, Frans Filip January 2020 (has links)
Lithological core logging is a subjective and time-consuming endeavour which could possibly be automated; the question is if and to what extent this automation would affect the resulting core logs. This study presents a case from the Zinkgruvan Zn-Pb-Ag mine, Bergslagen, Sweden, in which Classification and Regression Trees and K-means clustering on the Self Organising Map were applied to X-Ray Fluorescence lithogeochemistry data derived from automated core scan technology. These two methods are assessed through comparison to manual core logging. It is found that the X-Ray Fluorescence data are not sufficiently accurate or precise for the purpose of automated full lithological classification, since not all elements are successfully quantified. Furthermore, not all lithologies are possible to distinguish with lithogeochemistry alone, further hindering the success of automated lithological classification. This study concludes that: 1) K-means on the Self Organising Map is the most successful approach, although this may be influenced by the method of domain validation; 2) the choice of ground truth for learning is important for both supervised learning and the assessment of machine learning accuracy; and 3) geology, data resolution and choice of elements are important parameters for machine learning. Both the supervised method of Classification and Regression Trees and the unsupervised method of K-means clustering applied to Self Organising Maps show potential to assist core logging procedures.
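As a hedged illustration of the Classification and Regression Tree side of this comparison, the core operation (finding the threshold on one element's concentration that minimizes weighted Gini impurity) can be sketched as follows. The zinc readings and lithology labels are invented, not Zinkgruvan data.

```python
def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def best_split(values, labels):
    """Exhaustive search for the single threshold minimizing weighted Gini,
    the core operation of a Classification and Regression Tree."""
    best = (None, float("inf"))
    for t in sorted(set(values)):
        left = [l for v, l in zip(values, labels) if v <= t]
        right = [l for v, l in zip(values, labels) if v > t]
        if not left or not right:
            continue
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(values)
        if score < best[1]:
            best = (t, score)
    return best

# Hypothetical XRF zinc readings (wt%) for two manually logged lithologies.
zn = [0.1, 0.2, 0.3, 0.2, 4.1, 3.8, 4.5, 5.0]
lith = ["host", "host", "host", "host", "ore", "ore", "ore", "ore"]
threshold, score = best_split(zn, lith)
print(threshold, score)  # a perfect split: weighted Gini of 0.0
```

A real tree would repeat this search over every quantified element and recurse on each side of the split; poorly quantified elements, as the abstract notes, degrade exactly this step.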
56

Numerical Methods for Classification of Metagenomic Data

Vaněčková, Tereza January 2016 (has links)
This thesis deals with metagenomics and numerical methods for the classification of metagenomic data. A review of alignment-free methods based on nucleotide word frequency is provided, as these appear to be effective for processing metagenomic sequence reads produced by next-generation sequencing technologies. To evaluate these methods, selected features based on k-mer analysis were tested on a simulated dataset of metagenomic sequence reads. The data in the original data space were then used for hierarchical clustering, and PCA-processed data were clustered by the K-means algorithm. The analysis was performed for different lengths of nucleotide words and evaluated in terms of classification accuracy.
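The alignment-free representation reviewed here (nucleotide word, or k-mer, frequency vectors) can be sketched directly. The reads below, and the claim that same-genome reads sit closer in this space, are illustrative only, not the thesis's dataset.

```python
from collections import Counter
from itertools import product
from math import sqrt

def kmer_profile(seq, k=2):
    """Normalized k-mer frequency vector: an alignment-free read signature."""
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts[km] for km in kmers) or 1
    return [counts[km] / total for km in kmers]

def euclid(a, b):
    return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Hypothetical reads: two from a GC-rich genome, one AT-rich.
r1, r2, r3 = "GCGCGGCCGCGC", "GGCGCCGCGGCG", "ATATTAATATAT"
p1, p2, p3 = (kmer_profile(r, k=2) for r in (r1, r2, r3))
print(euclid(p1, p2) < euclid(p1, p3))  # same-genome reads are closer
```

Varying `k` changes the dimensionality (4^k) and the discriminative power of the profile, which is the tradeoff the thesis evaluates.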
57

Analysis of 3D CT Image Data Aimed at Detection and Classification of Specific Tissue Structures

Šalplachta, Jakub January 2017 (has links)
This thesis deals with the segmentation and classification of paraspinal muscle and subcutaneous adipose tissue in 3D CT image data, in order to use them subsequently as internal calibration phantoms for measuring the bone mineral density of a vertebra. The chosen methods were tested and then evaluated in terms of correctness of the classification and overall suitability for subsequent BMD value calculation. The algorithms were tested in the Matlab® programming environment on a patient database containing the lumbar spines of twelve patients. The thesis also contains a theoretical survey of bone mineral density measurement, of segmentation and classification methods, and a description of the practical part of this work.
58

Traveling Salesman Problem with Single Truck and Multiple Drones for Delivery Purposes

Rahmani, Hoda 23 September 2019 (has links)
No description available.
59

Help Document Recommendation System

Vijay Kumar, Keerthi, Mary Stanly, Pinky January 2023 (has links)
Help documents are important in an organization for using the technology applications licensed from a vendor. Customers and internal employees frequently use and interact with the help documents section to operate the applications and learn about new features and developments in them. Help documents consist of various knowledge base materials, question and answer documents and help content. In day-to-day life, customers go through these documents to set up, install or use the product. Recommending similar documents to customers can increase customer engagement with the product and can also help them proceed without any hurdles. The main aim of this study is to build a recommendation system by exploring different machine-learning techniques to recommend the most relevant and similar help document to the user. To achieve this, a hybrid recommendation system for help documents is proposed in which documents are recommended based on similarity of content using content-based filtering and similarity between users using collaborative filtering. Finally, the recommendations from content-based filtering and collaborative filtering are combined and ranked to form a comprehensive list of recommendations. The proposed approach is evaluated by the internal employees of the company and by external users. Our experimental results demonstrate that the proposed approach is feasible and provides an effective way to recommend help documents.
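A minimal sketch of the hybrid scheme described above: blend a content-based score (cosine similarity of document term vectors) with a collaborative score (co-view counts across other users), then rank. The document vectors, the view matrix, and the blending weight `alpha` are all invented for illustration.

```python
from math import sqrt

def cosine(a, b):
    num = sum(x * y for x, y in zip(a, b))
    den = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)) or 1.0
    return num / den

# Hypothetical term vectors for help documents (content-based signal).
docs = {"install": [1, 1, 0, 0],
        "setup":   [1, 0.8, 0.1, 0],
        "billing": [0, 0, 1, 1]}
# Hypothetical user-document view matrix (collaborative signal).
views = {"u1": {"install": 1, "setup": 1},
         "u2": {"install": 1, "billing": 1},
         "u3": {"setup": 1}}

def recommend(user, current_doc, alpha=0.5):
    """Blend content similarity to the open doc with co-view popularity."""
    co_view = {d: sum(1 for u, v in views.items()
                      if u != user and current_doc in v and d in v)
               for d in docs}
    scores = {}
    for d in docs:
        if d == current_doc or d in views.get(user, {}):
            continue  # skip the open doc and docs this user already viewed
        content = cosine(docs[current_doc], docs[d])
        collab = co_view[d] / max(max(co_view.values()), 1)
        scores[d] = alpha * content + (1 - alpha) * collab
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("u3", "setup"))
```

For user `u3` reading the setup document, both signals favor the install document, so the blended ranking puts it first.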
60

Improving Knowledge of Truck Fuel Consumption Using Data Analysis

Johnsen, Sofia, Felldin, Sarah January 2016 (has links)
Research has established the large potential of big data and the value it has brought to various industries. Since big data has such large potential if handled and analyzed in the right way, revealing information to support decision making in an organization, this thesis is conducted as a case study at an automotive manufacturer with access to large amounts of customer usage data from their vehicles. The reason for performing an analysis of this kind of data is based on the cornerstones of Total Quality Management, with the end objective of increasing customer satisfaction with the concerned products or services. The case study includes a data analysis exploring whether and how patterns about what affects fuel consumption can be revealed from aggregated customer usage data of trucks linked to truck applications. Based on the case study, conclusions are drawn about how a company can use this type of analysis as well as how to handle the data in order to turn it into business value. The data analysis reveals properties describing truck usage using Factor Analysis and Principal Component Analysis. One property in particular is concluded to be important, as it appears in the results of both techniques. Based on these properties the trucks are clustered using k-means and Hierarchical Clustering, which shows groups of trucks where the importance of the properties varies. Due to the homogeneity and complexity of the chosen data, the clusters of trucks cannot be linked to truck applications; this would require data that is more easily interpretable. Finally, the importance for fuel consumption in the clusters is explored using model estimation. A comparison of Principal Component Regression (PCR) and the two regularization techniques Lasso and Elastic Net is made. PCR results in poor models that are difficult to evaluate. The two regularization techniques, however, outperform PCR, both giving a higher and very similar explained variance.
The three techniques do not show obvious similarities in the models, and no conclusions can therefore be drawn concerning what is important for fuel consumption. During the data analysis many problems with the data are discovered, which are linked to managerial and technical issues of big data. This means, for example, that some of the parameters interesting for the analysis cannot be used, which is likely to have an impact on the inability to get unanimous results in the model estimations. It is also concluded that the data was not originally intended for this type of analysis of large populations, but rather for testing and engineering purposes. Nevertheless, this type of data still contains valuable information and can be used if managed in the right way. From the case study it can be concluded that, in order to use the data for more advanced analysis, a big-data plan is needed at a strategic level in the organization. The plan summarizes the suggested solution to the managerial issues of big data for the organization. It describes how to handle the data, how the analytic models revealing the information should be designed, and the tools and organizational capabilities needed to support the people using the information.
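The Lasso side of the model-estimation comparison rests on soft-thresholding, which drives irrelevant coefficients exactly to zero. A minimal coordinate-descent sketch on invented truck-usage features (not the thesis's data) shows the effect.

```python
def soft_threshold(rho, lam):
    """Soft-thresholding operator: the heart of Lasso coordinate descent."""
    if rho > lam:
        return rho - lam
    if rho < -lam:
        return rho + lam
    return 0.0

def lasso_cd(X, y, lam, iters=200):
    """Plain coordinate-descent Lasso (no intercept, for illustration)."""
    n, p = len(X), len(X[0])
    w = [0.0] * p
    for _ in range(iters):
        for j in range(p):
            # Correlation of feature j with the residual excluding feature j.
            rho = sum(X[i][j] * (y[i] - sum(X[i][k] * w[k]
                                            for k in range(p) if k != j))
                      for i in range(n))
            z = sum(X[i][j] ** 2 for i in range(n))
            w[j] = soft_threshold(rho, lam) / z
    return w

# Hypothetical truck-usage features: only the first truly drives fuel use.
X = [[1, 0.10], [2, -0.20], [3, 0.05], [4, -0.10], [5, 0.15]]
y = [2.1, 3.9, 6.2, 7.8, 10.1]  # roughly y = 2 * x1 plus noise
w = lasso_cd(X, y, lam=1.0)
print(w)  # the irrelevant second coefficient is driven exactly to zero
```

Elastic Net adds a ridge term to the same update, which is why the two regularized models in the study behave so similarly.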
