1

Which factors occur, and how thoroughly are they described, in municipalities' decision-making material when the water and sewerage tariff (VA-taxa) is set?

Ahmed, Halgan; Nilsson, Sandra (January 2013)
Purpose: The purpose of this study is to analyse which factors occur in the decision-making material when the water and sewerage tariff (VA-taxa) is set, and how thoroughly these factors are described, in order to understand what the decision-making material in municipalities around Sweden is based on. Method: Starting from a hermeneutic philosophy, we sought a deeper understanding by applying that knowledge in a pilot study. During the study we moved back and forth between theory and empirical data, giving the study an abductive research approach. Theory: The study builds on previous research in the field, which was used to identify our traditional and non-traditional factors. Empirics: Data were collected through semi-structured in-depth interviews covering the water and sewerage operations of five municipalities, all with traditional organizational forms. Analysis: We carried out a comprehensive analysis to identify differences and similarities between the municipalities and the occurrence of the factors. All municipalities take the traditional factors into account where they occur, whereas the occurrence of the non-traditional factors varies. Conclusion: The study concludes that the traditional factors are more prominent than the non-traditional ones. It also shows that the non-traditional factors occur to a greater extent than previous studies indicate.
2

Empirical evaluation of optimization techniques for classification and prediction tasks

Leke, Collins Achepsah (27 March 2014)
M.Ing. (Electrical and Electronic Engineering) / Missing data causes a variety of problems in the analysis and processing of datasets in almost every area of day-to-day life, and ways of handling it have therefore become an active research topic across many disciplines. This thesis presents a method for approximating missing values in a dataset using a Genetic Algorithm (GA), Simulated Annealing (SA), Particle Swarm Optimization (PSO), Random Forest (RF) and Negative Selection (NS) in combination with auto-associative neural networks, and provides a comparative analysis of these algorithms. The proposed methods use the optimization algorithms to minimize an error function derived from training an auto-associative neural network, during which the interrelationships between the inputs and the outputs are learned and stored in the weights connecting the layers of the network. The error function is the square of the difference between the actual observations and the values predicted by the auto-associative network. When data are missing, not all values of an observation are known; the error function is therefore decomposed so that it depends on both the known and the unknown variable values. A Multi-Layer Perceptron (MLP) architecture is used, trained with the Scaled Conjugate Gradient (SCG) method.

The research primarily focuses on predicting missing entries in two datasets, the Manufacturing dataset and the Forest Fire dataset, where prediction means estimating how things will occur in the future on the basis of past occurrences and experience. It also investigates how accurately the proposed technique approximates and classifies missing data on five classification datasets (Australian Credit, German Credit, Japanese Credit, Heart Disease and Car Evaluation), as well as the impact of different neural network architectures on the approximations, with the best-performing architecture used for evaluation.

The results show that the missing values approximated by the proposed models are accurate: on the Manufacturing dataset the correlation between the actual and approximated values ranges between 94.7% and 95.2%, with the exception of the Negative Selection algorithm, which yielded a correlation coefficient of 49.6%. On the Forest Fire dataset the correlation was low, in the range 0.95% to 4.49%, owing to the nature of the variables in that dataset, and the Negative Selection algorithm produced a negative correlation between the actual and approximated values of 100% in magnitude. The approximations also depend on the particular network architecture used in training. Further analysis revealed that the Random Forest algorithm performed better on average than the GA, SA, PSO and NS algorithms, yielding the lowest Mean Square Error, Root Mean Square Error and Mean Absolute Error values; at the other end of the scale, the NS algorithm produced the highest values for all three error metrics, for which lower values mean better performance.

On the classification datasets, the Random Forest algorithm was the most accurate at assigning a new observation to one of a set of categories on the basis of the training data, yielding the highest AUC values on all five datasets. The differences between its AUC values and those of the GA, SA, PSO and NS algorithms were statistically significant, most markedly in the comparison with the Negative Selection algorithm. The AUC values of the GA, SA and PSO algorithms, compared against one another, did not differ much. Overall, the Random Forest algorithm performed best on both the prediction and the classification problems, while the Negative Selection algorithm performed worst, producing the highest error metric values for the prediction problems and the lowest AUC values for the classification problems.
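The scheme the abstract describes, minimizing the auto-associative network's reconstruction error e(x_u) = ||x - f(x, W)||^2, where the record x is split into known entries x_k and unknown entries x_u, can be sketched compactly. The sketch below is an illustration under stated assumptions, not the thesis's code: it uses scikit-learn for the auto-associative network and SciPy's differential evolution as a stand-in for the GA/SA/PSO optimizers the thesis compares, and the synthetic data, network size and bounds are all invented for the example.

```python
# Minimal sketch of missing-data imputation via an auto-associative network.
# All data, parameters and the choice of optimizer are illustrative assumptions.
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic data with genuine interrelationships between the variables,
# so the auto-associative network has structure to learn.
t = rng.random(500)
X = np.column_stack([t, t ** 2, np.sin(t), 1.0 - t]) \
    + 0.01 * rng.normal(size=(500, 4))

# Auto-associative network: inputs are mapped back onto themselves, so the
# interrelationships between variables are stored in the connection weights.
net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
net.fit(X, X)

record = X[0].copy()
missing = [1, 3]  # indices of the unknown entries x_u; the rest form x_k

def reconstruction_error(u):
    """Squared difference between the candidate record and the network's
    output, viewed as a function of the unknown entries only (the known
    entries are held fixed)."""
    x = record.copy()
    x[missing] = u
    return float(np.sum((x - net.predict(x.reshape(1, -1))[0]) ** 2))

# Evolutionary search over the unknown entries, bounded by observed ranges.
bounds = [(X[:, j].min(), X[:, j].max()) for j in missing]
result = differential_evolution(reconstruction_error, bounds, seed=0)

print("actual missing values:", record[missing])
print("approximated values:  ", result.x)
```

Swapping the optimizer for a simulated-annealing or particle-swarm routine changes only the search over x_u; the trained network and the error function stay fixed, which is what makes the kind of head-to-head comparison reported in the abstract possible.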
