101

The determinants of voter turnout in OECD : An aggregated cross-national study using panel data

Olsén Ingefeldt, Niclas January 2016 (has links)
This paper examines in a descriptive manner how two groups of variables, institutional and socio-economic, correlate with voter turnout and whether their magnitudes have changed over time in OECD countries. Previous research is often based on data from the 1970s and 1980s. Since then, voter turnout in democratic countries has decreased, and more citizens forgo their fundamental democratic right to take part in choosing their representatives. To test the paper's hypotheses, i.e. to analyse which factors correlate with voter turnout, panel data covering 1980 to 2012 are estimated with an OLS approach. The empirical estimates indicate that 13 of the 19 variables have a significant relationship with turnout, and most of their magnitudes are somewhat lower than in the previous literature. The time-sensitivity analysis indicates that voters are less influenced by the significant variables related to the cost of voting; voters in the 21st century appear to face voting costs in a different manner than before.
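As a rough illustration of the kind of estimation the abstract describes, the following Python sketch fits a pooled OLS model on country-year panel data with country and decade fixed effects. The file name and the covariates (compulsory_voting, proportional_system, gdp_growth, unemployment) are hypothetical placeholders, not the thesis's actual variables.

import pandas as pd
import statsmodels.formula.api as smf

# Assumed layout: one row per country and election year, 1980-2012.
panel = pd.read_csv("oecd_turnout_1980_2012.csv")

model = smf.ols(
    "turnout ~ compulsory_voting + proportional_system + gdp_growth"
    " + unemployment + C(country) + C(decade)",
    data=panel,
).fit()
print(model.summary())  # coefficient magnitudes and significance per variable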
102

Analys av solelinstallationer på olika fastighetstyper : En studie om möjligheter och hinder

Nilsson, Sanna January 2016 (has links)
Generering av el med hjälp av solen står idag för en väldigt liten andel av Sveriges totala generering av el. För att komma upp till en högre nivå krävs inte enorma solcellsparker över hela Sverige, utan tvärtom skulle det kunna gå att använda redan befintliga tak. Det förhållandet är bara ett av flera som påvisar behovet av större kunskapsspridning om dagens energisystem och solcellsteknik. Det finns också ett behov av ökad kunskap om karaktäristiska svenska hus- och taktyper och om vad som är möjligt att göra med varierande slag av solcellsteknik, för att konvertera de olika hustypernas tak till en liten energikälla. Ren solenergi kan inte ensam konkurrera ut fossila bränslen, men det finns god potential för en mycket högre andel av den än vad som är installerat idag. Wallenstam AB är ett energimedvetet fastighetsbolag som bland annat är verksamma i Göteborg. Deras intresse av förnybar energi och av att ersätta fossilt bränsle med mer miljövänlig teknik gav upphov till ett samarbete mellan författaren och Wallenstam AB. Arbetets första del syftar till att inbringa mer och fördjupad kunskap inom ämnet solceller och solcellsanläggningar. Syftet med arbetets andra del är att undersöka vilka möjligheter och hinder det finns med solcellsinstallationer på några olika typiska svenska fastighetstyper. En projektering genomförs på tre olika fastighetstyper i Göteborgsområdet; en industri/kontorslokal, ett modernt flerbostadshus och en fastighet i centrala Göteborg med både bostäder och kommersiell verksamhet. Målet är att utifrån varje fastighets förutsättningar finna den eller de mest lämpade solcellspanelerna för respektive fastighet med avseende på ekonomisk fördelaktighet, estetik samt i vad som är tekniskt möjligt utifrån exempelvis tidigare installerade energisystem. Rapporten och dess resultat är skapat utifrån litteraturstudier, platsbesök på fastigheter, mätningar på ritningar och satellitkartor, beräkningar för hand, samt modellering i PVsyst. Fortlöpande diskussion med personer inom solenergibranschen har också hållits. För att beräkna respektive anläggnings elproduktionskostnad samt för känslighetsanalyser används en webbaserad beräkningsapplikation, tillhörande rapporten El från nya och framtida anläggningar 2014. Huvudresultat är att endast två av de tekniskt möjliga anläggningarna har en elproduktionskostnad som är i närheten av det jämförande elpriset på 0,75 kr/kWh. Det är de polykristallina solcellsanläggningarna på Kvillebäcken 3:1 och Mölnlycke 1:1. Beräkningarna visar på att anläggningarna med polykristallina solcellspaneler på Kvillebäcken 3:1 och Mölnlycke Fabriker 1:1 är riktigt konkurrenskraftiga om kalkylräntan sänks från 4 % till 2 %, eller om investeringskostnaderna minskar med 15 %. På fastigheten Inom Vallgraven 26:8 ligger samtliga elproduktionskostnader över en krona per kilowattimme, men installation av solcellsteknik på en sådan central byggnad skulle kunna ha ett marknadsföringsvärde. / Generation of electricity from solar irradiation accounts today for only a very small share of Sweden's total electricity generation. Reaching a higher level does not require building enormous solar parks all over Sweden; it could instead be achieved by using already existing rooftops. That fact is one of several that point to the need for wider dissemination of knowledge about today's energy system and photovoltaic technology. There is also a need for knowledge about characteristic Swedish house and roof types, and about which installations are possible with different kinds of photovoltaic technology, in order to convert the different kinds of rooftops into small power sources. Solar power alone cannot out-compete fossil fuels, but it has good potential to reach a much higher share than it has today. Wallenstam AB is an energy-conscious real-estate company active in, among other places, Gothenburg. Its interest in renewable energy and its ambition to replace fossil fuel with more environmentally friendly technology gave rise to this cooperation. The first part of the report aims to build deeper knowledge of photovoltaics and solar plants. The second part investigates the possibilities and impediments of photovoltaic installations on different kinds of typical Swedish buildings. A planning study is carried out for three building types in the Gothenburg area: an industrial/office building, a modern apartment block, and a central Gothenburg building with both apartments and commercial activity. The goal is to find the most suitable photovoltaic installation for each of the three buildings, based on economic advantage, appearance and aesthetics, and what is technically possible given, for example, previously installed energy systems. The report and its results are based on literature studies, site visits, measurements on drawings and satellite maps, hand calculations, and modelling in the software PVsyst. Ongoing discussions with people in the solar energy industry were also held. To calculate the levelized cost of energy (LCOE) and to perform the sensitivity analyses, a web-based calculator belonging to the report El från nya och framtida anläggningar 2014 was used. The main result is that only two of the technically possible installations obtain an LCOE close to the reference electricity price of 0.75 SEK/kWh: the installations with polycrystalline solar panels at Kvillebäcken 3:1 and Mölnlycke 1:1. The results also show that the two installations with polycrystalline panels at Kvillebäcken 3:1 and Mölnlycke Fabriker 1:1 become clearly competitive if the discount rate is reduced from 4 % to 2 %, or if the investment cost is reduced by 15 %. At the property Inom Vallgraven 26:8, the LCOE for all possible installations is above one Swedish krona per kilowatt hour, but installing solar panels on such a central building could have a marketing value. It would also demonstrate Wallenstam's commitment to renewable energy, new technology, and a sustainable energy system.
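The decisive comparison in the abstract is the levelized cost of energy (LCOE) against a reference price of 0.75 SEK/kWh, and its sensitivity to the discount rate (4 % vs. 2 %) and to a 15 % lower investment cost. The Python sketch below shows such an LCOE calculation in minimal form; all numeric inputs are hypothetical placeholders rather than the thesis's project data, and the web-based calculator mentioned above is not reproduced here.

def lcoe(capex_sek, annual_om_sek, annual_kwh, lifetime_years, discount_rate, degradation=0.005):
    """Levelized cost of energy in SEK/kWh (discounted costs / discounted energy)."""
    disc_costs = capex_sek
    disc_energy = 0.0
    for year in range(1, lifetime_years + 1):
        df = 1.0 / (1.0 + discount_rate) ** year
        disc_costs += annual_om_sek * df
        disc_energy += annual_kwh * (1.0 - degradation) ** (year - 1) * df
    return disc_costs / disc_energy

base = lcoe(capex_sek=450_000, annual_om_sek=2_000, annual_kwh=38_000,
            lifetime_years=30, discount_rate=0.04)
low_rate = lcoe(450_000, 2_000, 38_000, 30, discount_rate=0.02)          # discount rate cut to 2 %
low_capex = lcoe(450_000 * 0.85, 2_000, 38_000, 30, discount_rate=0.04)  # investment cost -15 %
print(f"{base:.2f}, {low_rate:.2f}, {low_capex:.2f} SEK/kWh vs. 0.75 SEK/kWh reference")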
103

Statistical Approaches for Handling Missing Data in Cluster Randomized Trials

Fiero, Mallorie H. January 2016 (has links)
In cluster randomized trials (CRTs), groups of participants are randomized as opposed to individual participants. This design is often chosen to minimize treatment arm contamination or to enhance compliance among participants. In CRTs, we cannot assume independence among individuals within the same cluster because of their similarity, which leads to decreased statistical power compared to individually randomized trials. The intracluster correlation coefficient (ICC) is crucial in the design and analysis of CRTs, and measures the proportion of total variance due to clustering. Missing data are a common problem in CRTs and should be accommodated with appropriate statistical techniques, because they can compromise the advantages created by randomization and are a potential source of bias. In three papers, I investigate statistical approaches for handling missing data in CRTs. In the first paper, I carry out a systematic review evaluating current practice of handling missing data in CRTs. The results show high rates of missing data in the majority of CRTs, yet handling of missing data remains suboptimal. Fourteen (16%) of the 86 reviewed trials reported carrying out a sensitivity analysis for missing data. Despite suggestions to weaken the missing data assumption from the primary analysis, only five of the trials weakened the assumption. None of the trials reported using missing not at random (MNAR) models. Because of the low proportion of CRTs reporting an appropriate sensitivity analysis for missing data, the second paper aims to facilitate performing a sensitivity analysis for missing data in CRTs by extending the pattern mixture approach for missing clustered data under the MNAR assumption. I implement multilevel multiple imputation (MI) in order to account for the hierarchical structure found in CRTs, and multiply the imputed values by a sensitivity parameter, k, to examine parameters of interest under different missing data assumptions. The simulation results show that estimates of parameters of interest in CRTs can vary widely under different missing data assumptions. A high proportion of missing data can occur in CRTs because missing data can be found at the individual level as well as the cluster level. In the third paper, I use a simulation study to compare missing data strategies to handle missing cluster level covariates, including the linear mixed effects model, single imputation, single level MI ignoring clustering, MI incorporating clusters as fixed effects, and MI at the cluster level using aggregated data. The results show that when the ICC is small (ICC ≤ 0.1) and the proportion of missing data is low (≤ 25%), the mixed model generates unbiased estimates of regression coefficients and ICC. When the ICC is higher (ICC > 0.1), MI at the cluster level using aggregated data performs well for missing cluster level covariates, though caution should be taken if the percentage of missing data is high.
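As an illustration of two of the quantities discussed above, the Python sketch below estimates the intracluster correlation coefficient (ICC) from a random-intercept model on simulated clustered data and then applies a pattern-mixture-style sensitivity multiplier k to imputed values. It is a simplified stand-in for the dissertation's multilevel multiple imputation procedure, and all numbers are invented.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
clusters = np.repeat(np.arange(20), 30)                    # 20 clusters x 30 participants
cluster_effect = rng.normal(0, 1.0, 20)[clusters]          # between-cluster variance = 1
y = 5 + cluster_effect + rng.normal(0, 3.0, clusters.size) # within-cluster variance = 9
df = pd.DataFrame({"y": y, "cluster": clusters})

fit = sm.MixedLM.from_formula("y ~ 1", groups="cluster", data=df).fit()
var_cluster = fit.cov_re.iloc[0, 0]
icc = var_cluster / (var_cluster + fit.scale)
print(f"estimated ICC = {icc:.3f}")                        # true value here is 1/(1+9) = 0.1

# MNAR sensitivity step: adjust imputed values by k and re-estimate the analysis model.
k = 0.8                                                    # k = 1 recovers the MAR-based imputation
imputed = rng.normal(y.mean(), y.std(), size=50)           # stand-in for multilevel MI draws
adjusted = k * imputed                                     # shifted away from the MAR assumption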
104

A generic predictive information system for resource planning and optimisation

Tavakoli, Siamak January 2010 (has links)
The purpose of this research work is to demonstrate the feasibility of creating a quick-response decision platform for middle management in industry. It utilises the strengths of current Supervisory Control and Data Acquisition (SCADA) systems and Discrete Event Simulation and Modelling (DESM), but more importantly creates a leap forward in their theory and practice. The proposed research platform uses real-time data and creates an automatic platform for real-time and predictive system analysis, giving current and ahead-of-time information on the performance of the system in an efficient manner. Data acquisition, as the back-end connection of the data integration system to the shop floor, faces both hardware and software challenges in coping with large-scale real-time data collection. The limited scope of SCADA systems makes them unsuitable candidates for this, and the cost, complexity, and efficiency-orientation of proprietary solutions leave further challenges open. A Flexible Data Input Layer Architecture (FDILA) is proposed as a generic data integration platform so that a multitude of data sources can be connected to the data processing unit. The efficiency of the proposed integration architecture lies in decentralising and distributing services between different layers. A novel Sensitivity Analysis (SA) method called EvenTracker is proposed as an effective tool to measure the importance and priority of inputs to the system. The EvenTracker method is introduced to deal with complex systems in real time. The approach takes advantage of an event-based definition of the data involved in the process flow. The underpinning logic behind the EvenTracker SA method is capturing the cause-effect relationships between triggers (input variables) and events (output variables) over a specified period of time determined by an expert. The approach does not require estimating data distributions of any kind, nor does it require executing the performance model beyond real time. The proposed EvenTracker sensitivity analysis method has the lowest computational complexity compared with other popular sensitivity analysis methods. For proof of concept, a three-tier data integration system was designed and developed using National Instruments' LabVIEW programming language, Rockwell Automation's Arena simulation and modelling software, and OPC data communication software. A laboratory-based conveyor system with 29 sensors was installed to simulate a typical shop-floor production line. In addition, the EvenTracker SA method was applied to the data extracted from 28 sensors of one manufacturing line in a real factory. The experiment found 14% of the input variables to be unimportant for evaluating the model outputs, and the method yielded a time-efficiency gain of 52% in the analysis of the filtered system once the unimportant input variables were no longer sampled. Compared with the entropy-based SA technique, the only other method that can be used for real-time purposes, EvenTracker is quicker, more accurate, and less computationally burdensome. Additionally, a theoretical estimate of the computational complexity of SA methods, based on both structural complexity and energy-time analysis, favoured the efficiency of the proposed EvenTracker SA method. Both the laboratory and factory-based experiments demonstrated the flexibility and efficiency of the proposed solution.
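The abstract does not give the EvenTracker algorithm itself; the Python sketch below is only one reading of its event-based idea: score each input ("trigger") by how often its events are followed, within a user-chosen time window, by an event in the output. Inputs with low scores would be candidates for no longer being sampled, as in the 14 % / 52 % result above. The function name, window, and threshold are illustrative assumptions.

import numpy as np

def event_sensitivity(inputs, output, window=5, threshold=0.0):
    """inputs: (n_signals, T) array; output: (T,) array. Returns one score per input."""
    out_events = np.flatnonzero(np.abs(np.diff(output)) > threshold)
    scores = []
    for sig in inputs:
        in_events = np.flatnonzero(np.abs(np.diff(sig)) > threshold)
        if in_events.size == 0:
            scores.append(0.0)
            continue
        # fraction of input events followed by an output event within the window
        hits = sum(np.any((out_events > t) & (out_events <= t + window)) for t in in_events)
        scores.append(hits / in_events.size)
    return np.array(scores)   # low-scoring inputs are candidates for not being sampled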
105

Optimal shape design based on body-fitted grid generation.

Mohebbi, Farzad January 2014 (has links)
Shape optimization is an important step in many design processes. With the growing use of Computer Aided Engineering in the design chain, it has become very important to develop robust and efficient shape optimization algorithms. The field of Computer Aided Optimal Shape Design has grown substantially over the recent past. In the early days of its development, the method based on small shape perturbations to probe the parameter space and identify an optimal shape was routinely used; this method is nothing but an educated trial-and-error method. A key development in the pursuit of good shape optimization algorithms has been the advent of the adjoint method to compute the shape sensitivities more formally and efficiently. While undoubtedly very attractive, this method relies on very sophisticated and advanced mathematical tools, which are an impediment to its wider use in the engineering community. In that spirit, the purpose of this thesis is to propose a new shape optimization algorithm based on more intuitive engineering principles and numerical procedures. The new shape optimization procedure proposed in this thesis is based on the generation of a body-fitted mesh, a process which maps the physical domain into a regular computational domain. Based on simple arguments relating to the use of the chain rule in the mapped domain, it is shown that an explicit expression for the shape sensitivity can be derived. This enables the computation of the shape sensitivity in a single solve, a performance analogous to the adjoint method, the current state of the art. The discretization is based on the Finite Difference method, chosen for its simplicity and ease of implementation. The algorithm is applied to the Laplace equation in the context of heat transfer problems and potential flows. The applicability of the proposed algorithm is demonstrated on a number of benchmark problems which clearly confirm the validity of the sensitivity analysis, the most important aspect of any shape optimization problem. This thesis also explores the relative merits of different minimization algorithms and proposes a technique to “fix” meshes when inverted elements arise as part of the optimization process. While the problems treated are still elementary compared to complex multiphysics engineering problems, the new methodology presented in this thesis could apply in principle to arbitrary Partial Differential Equations.
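The thesis's own contribution is an explicit chain-rule expression that yields the shape sensitivity in a single solve; that derivation is not reproduced here. The generic finite-difference check below is the usual reference against which such analytical sensitivities are validated on benchmark problems; the objective is left abstract (for example, a boundary-flux mismatch from a Laplace solve on a body-fitted grid) and is an assumption of this sketch.

import numpy as np

def fd_shape_sensitivity(objective, shape_params, eps=1e-6):
    """objective: callable mapping a shape-parameter vector to a scalar cost.
    Returns a one-sided finite-difference gradient for validating analytical sensitivities."""
    p = np.asarray(shape_params, dtype=float)
    base = objective(p)
    grad = np.zeros_like(p)
    for i in range(p.size):
        perturbed = p.copy()
        perturbed[i] += eps              # small perturbation of one boundary parameter
        grad[i] = (objective(perturbed) - base) / eps
    return grad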
106

Sensitivity Analysis and Distortion Decomposition of Mildly Nonlinear Circuits

Zhu, Guoji January 2007 (has links)
The Volterra Series (VS) is often used in the analysis of mildly nonlinear circuits. In this approach, nonlinear circuit analysis is converted into the analysis of a series of linear circuits. The main benefit of this approach is that linear circuit analysis is well established and direct frequency-domain analysis of a nonlinear circuit becomes possible. Sensitivity analysis is useful for comparing the quality of two designs and for evaluating gradient, Jacobian, or Hessian matrices in analog Computer Aided Design. This thesis presents, for the first time, the sensitivity analysis of mildly nonlinear circuits in the frequency domain as an extension of the VS approach. To overcome the efficiency limitation due to multiple mixing effects, the Nonlinear Transfer Matrix (NTM) is introduced. It is the first explicit analytical representation of the complicated multiple mixing effects. Applying the NTM to sensitivity analysis yields speedups of up to two orders of magnitude. Per-element distortion decomposition determines the contribution of an individual nonlinearity towards the total distortion. It is useful in design optimization, symbolic simplification, and nonlinear model reduction. In this thesis, a numerical distortion decomposition technique is introduced which combines the insight of traditional symbolic analysis with the numerical advantages of SPICE-like simulators. The use of the NTM leads to an efficient implementation. The proposed method greatly extends the size of the circuit and the complexity of the transistor model over what previous approaches could handle. For example, an industry-standard compact model such as BSIM3V3 [35] was used for the first time in distortion analysis. The decomposition can be achieved at device, transistor, and block level, all with device-level accuracy. The theories have been implemented in a computer program and validated on examples. The proposed methods raise the performance of present VS-based distortion analysis to the next level.
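As background to the frequency-domain view taken above (and not as the thesis's NTM method), the toy Python example below drives a weak polynomial nonlinearity with two tones and inspects the spectrum: the second- and third-order terms produce the harmonic and intermodulation products that a Volterra-series analysis tracks order by order. The coefficients and frequencies are arbitrary assumptions.

import numpy as np

fs, n = 10_000, 4096
t = np.arange(n) / fs
x = 0.1 * (np.sin(2 * np.pi * 100 * t) + np.sin(2 * np.pi * 130 * t))
a2, a3 = 0.05, 0.02                      # mild second- and third-order coefficients
y = x + a2 * x**2 + a3 * x**3            # memoryless "mildly nonlinear" element

spectrum = np.abs(np.fft.rfft(y * np.hanning(n)))
freqs = np.fft.rfftfreq(n, 1 / fs)

def mag_at(f_hz):
    return spectrum[np.argmin(np.abs(freqs - f_hz))]

# Fundamental at 100 Hz vs. the second-order product at 230 Hz (f1 + f2)
# and the third-order product at 330 Hz (2*f1 + f2).
print(mag_at(100), mag_at(230), mag_at(330))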
107

Evaluation environnementale du véhicule électrique : méthodologies et application / Electric vehicle environmental assessment : methodologies and application

Picherit, Marie-Lou 27 September 2010 (has links)
Le véhicule électrique est aujourd’hui présenté comme l’une des solutions alternatives sérieuses au véhicule à moteur à combustion interne, visant à limiter la consommation d’énergies fossiles, ainsi que les émissions de polluants locaux et de gaz à effet de serre. L’évaluation des forces et faiblesses de cette technologie au regard de l’environnement est aujourd’hui limitée, compte tenu notamment du peu de retour d’expérience sur ce type de véhicules. L’objectif de ce travail de recherche est de proposer une approche combinant une connaissance fine du véhicule étudié (obtenu notamment par des essais expérimentaux et l’utilisation de modèles de consommation) et de la méthode d’évaluation environnementale Analyse de Cycle de Vie (ACV), pour identifier les paramètres clefs du bilan environnemental, et par différentes analyses de sensibilité, d’en proposer une analyse détaillée. Pour y parvenir, des essais expérimentaux ont été réalisés sur un véhicule électrique à usage essentiellement urbain et son équivalent thermique. Un modèle permet d’estimer les consommations de véhicules selon leurs spécificités (chimie et capacité de batterie, rendement de la chaîne de traction) et leurs conditions d’utilisation (trafic, usages d’auxiliaires). Des hypothèses et scénarios sont également établis sur la durée de vie des batteries qui équipent le véhicule. Les jeux de données obtenus sont mis en œuvre dans l’ACV d’un véhicule électrique, et les résultats obtenus interprétés puis comparés à ceux du véhicule thermique équivalent. Enfin, analyses de sensibilité et test de divers scénarios permettent l’identification des paramètres clefs du bilan environnemental. / Today, the electric vehicle is seen as a serious alternative to the internal combustion engine vehicle, aiming at reducing the consumption of fossil fuels and the emissions of local pollutants and greenhouse gases. The assessment of the strengths and weaknesses of this technology from the environmental viewpoint is currently limited, especially considering the lack of feedback from experience with such vehicles. The objective of this research is to offer an approach combining a detailed understanding of the studied vehicle (obtained through experiments and the use of consumption models) with the environmental assessment method Life Cycle Assessment (LCA), in order to identify the key parameters of the environmental appraisal and, through different sensitivity analyses, to propose a detailed analysis. To achieve this, experimental tests were carried out on an urban electric vehicle and its internal combustion engine equivalent. A model was built to estimate the consumption of electric vehicles according to their characteristics (chemistry and capacity of the battery, drivetrain efficiency) and their conditions of use (traffic, use of auxiliaries). Assumptions and scenarios are also made about the lifetime of the batteries in the vehicle. The data sets obtained are used in the life cycle assessment of an electric vehicle, and the results are interpreted and then compared with those of the equivalent internal combustion engine vehicle. Finally, sensitivity analyses and tests of various scenarios allow the identification of the key parameters of the environmental assessment.
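The consumption model referred to above is not published in the abstract; the Python sketch below shows merely the kind of relation such a model captures, with energy per kilometre driven by drivetrain efficiency and by auxiliary loads whose per-kilometre weight grows as traffic slows the vehicle down. All figures and the function name are illustrative assumptions.

def consumption_kwh_per_km(base_kwh_per_km, drivetrain_eff, aux_power_kw, avg_speed_kmh):
    driving = base_kwh_per_km / drivetrain_eff        # energy demand at the wheels / efficiency
    auxiliaries = aux_power_kw / avg_speed_kmh        # heating, lighting, etc. spread over each km
    return driving + auxiliaries

# Urban traffic with heating on: lower speed makes auxiliaries weigh more per km.
print(consumption_kwh_per_km(0.12, 0.85, aux_power_kw=1.5, avg_speed_kmh=20))   # ~0.216 kWh/km
print(consumption_kwh_per_km(0.12, 0.85, aux_power_kw=0.2, avg_speed_kmh=60))   # ~0.145 kWh/km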
108

A Time-Evolving Optimization Model for an Intermodal Distribution Supply Chain Network: A Case Study at a Healthcare Company

Johansson, Sara, Westberg, My January 2016 (has links)
Enticed by the promise of larger sales and better access to customers, consumer goods companies (CGCs) are increasingly looking to bypass traditional retailers and reach their customers directly, with a direct-to-customer (DTC) policy. The DTC trend has come to have a major impact on logistics operations and distribution channels. It offers significant opportunities for CGCs and wholesale brands to better control their supply chain network by circumventing the middlemen or retailers. However, to do so, CGCs may need to develop their omni-channel strategies and fortify their supply chain parameters, such as fulfillment, inventory flow, and goods distribution. This may give rise to changes in the supply chain network at the strategic, tactical, and operational levels. Motivated by recent interest in the DTC trend, this master thesis considers the time-evolving supply chain system of an international healthcare company with a preordained configuration. The input is a bottleneck part of the company's distribution network and involves 20-25% of its total market. A mixed-integer linear programming (MILP) multiperiod optimization model is developed, aiming to make tactical decisions for designing the distribution network, or more specifically, for determining the best strategy for distributing the products from the manufacturing plant to the primary distribution center and/or regional distribution centers, and from them to customers. The company has one manufacturing site (Mfg), one primary distribution center (PDP), and three different regional distribution centers (RDPs) worldwide, and the customers can be supplied from different plants with various transportation modes at different costs and lead times. The company's motivation is to investigate the possibility of reducing distribution costs by supplying most of the demand in time directly from the plants. The model selects the best option for each customer by making trade-offs among criteria involving distribution costs and lead times. To capture seasonal variability and account for market fluctuations, the model considers a full time horizon of one year. The model is analyzed and developed step by step, and its functionality is demonstrated by conducting experiments on the distribution network from our case study. In addition, the case-study distribution network topology is used to create random instances with random parameters, and the model is also evaluated on these instances. The computational experiments on these instances show that the model finds good-quality solutions, and demonstrate that significant cost reduction and modality improvement can be achieved in the distribution network. Using one year of actual data, it has been shown that the ratio of direct shipments could improve substantially. However, many factors can impact the results, such as short-term decisions at the operational level (like scheduling) as well as demand fluctuations, taxes, business rules, etc. Based on the results and managerial considerations, some possible extensions and final recommendations for the distribution chain are offered. Furthermore, an extensive sensitivity analysis is conducted to show the effect of the model's parameters on its performance. The sensitivity analysis employs a set of data from our case study together with randomly generated data to highlight certain features of the model and provide some insights into its behaviour.
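A minimal, single-period sketch of the routing decision described above is given below using the PuLP modelling library in Python; the real model is multiperiod and far richer. Site names, costs, lead times, and lead-time limits are invented placeholders, not the company's data.

import pulp

customers = ["c1", "c2", "c3"]
sources = ["Mfg", "PDP", "RDP_EU"]
cost = {("Mfg", "c1"): 4, ("PDP", "c1"): 6, ("RDP_EU", "c1"): 7,
        ("Mfg", "c2"): 9, ("PDP", "c2"): 5, ("RDP_EU", "c2"): 6,
        ("Mfg", "c3"): 8, ("PDP", "c3"): 7, ("RDP_EU", "c3"): 4}
lead = {("Mfg", "c1"): 10, ("PDP", "c1"): 4, ("RDP_EU", "c1"): 3,
        ("Mfg", "c2"): 12, ("PDP", "c2"): 5, ("RDP_EU", "c2"): 4,
        ("Mfg", "c3"): 11, ("PDP", "c3"): 6, ("RDP_EU", "c3"): 3}
max_lead = {"c1": 12, "c2": 5, "c3": 5}

prob = pulp.LpProblem("distribution", pulp.LpMinimize)
x = pulp.LpVariable.dicts("ship", cost, cat="Binary")
prob += pulp.lpSum(cost[k] * x[k] for k in cost)                 # minimize distribution cost
for c in customers:
    prob += pulp.lpSum(x[(s, c)] for s in sources) == 1          # exactly one source per customer
    for s in sources:
        prob += lead[(s, c)] * x[(s, c)] <= max_lead[c]          # lead-time limit if route is used
prob.solve()
print({k: x[k].value() for k in cost if x[k].value() == 1})      # chosen source per customer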
109

Analyse de sensibilité de modèles spatialisés : application à l'analyse coût-bénéfice de projets de prévention du risque d'inondation / Variance-based sensitivity analysis for spatially distributed models : application to cost-benefit analysis of flood risk management plans
Keywords: Spatially distributed model; Sensitivity analysis; Uncertainty; Scale; Geostatistics; CBA; Flood; Damage.

Saint-Geours, Nathalie 29 November 2012 (has links)
L'analyse de sensibilité globale basée sur la variance permet de hiérarchiser les sources d'incertitude présentes dans un modèle numérique et d'identifier celles qui contribuent le plus à la variabilité de la sortie du modèle. Ce type d'analyse peine à se développer dans les sciences de la Terre et de l'Environnement, en partie à cause de la dimension spatiale de nombreux modèles numériques, dont les variables d'entrée et/ou de sortie peuvent être des données distribuées dans l'espace. Le travail de thèse réalisé a pour ambition de montrer comment l'analyse de sensibilité globale peut être adaptée pour tenir compte des spécificités de ces modèles numériques spatialisés, notamment la dépendance spatiale dans les données d'entrée et les questions liées au changement d'échelle spatiale. Ce travail s'appuie sur une étude de cas approfondie du code NOE, qui est un modèle numérique spatialisé d'analyse coût-bénéfice de projets de prévention du risque d'inondation. On s'intéresse dans un premier temps à l'estimation d'indices de sensibilité associés à des variables d'entrée spatialisées. L'approche retenue du « map labelling » permet de rendre compte de l'auto-corrélation spatiale de ces variables et d'étudier son impact sur la sortie du modèle. On explore ensuite le lien entre la notion d'« échelle » et l'analyse de sensibilité de modèles spatialisés. On propose de définir les indices de sensibilité « zonaux » et « ponctuels » pour mettre en évidence l'impact du support spatial de la sortie d'un modèle sur la hiérarchisation des sources d'incertitude. On établit ensuite, sous certaines conditions, des propriétés formelles de ces indices de sensibilité. Ces résultats montrent notamment que l'indice de sensibilité zonal d'une variable d'entrée spatialisée diminue à mesure que s'agrandit le support spatial sur lequel est agrégée la sortie du modèle. L'application au modèle NOE des méthodologies développées se révèle riche en enseignements pour une meilleure prise en compte des incertitudes dans les modèles d'analyse coût-bénéfice des projets de prévention du risque d'inondation. / Variance-based global sensitivity analysis is used to study how the variability of the output of a numerical model can be apportioned to the different sources of uncertainty in its inputs. It is an essential component of model building, as it helps to identify the model inputs that account for most of the model output variance. However, this approach is seldom applied in the Earth and Environmental Sciences, partly because most of the numerical models developed in this field include spatially distributed inputs or outputs. Our research work aims to show how global sensitivity analysis can be adapted to such spatial models, and more precisely how to cope with the following two issues: i) the presence of spatial auto-correlation in the model inputs, and ii) scaling issues. We base our research on a detailed study of the numerical code NOE, which is a spatial model for cost-benefit analysis of flood risk management plans. We first investigate how variance-based sensitivity indices can be computed for spatially distributed model inputs. We focus on the “map labelling” approach, which makes it possible to handle any complex spatial structure of uncertainty in the model inputs and to assess its effect on the model output. Next, we explore how scaling issues interact with the sensitivity analysis of a spatial model. We define “block sensitivity indices” and “site sensitivity indices” to account for the role of the spatial support of the model output. We establish the properties of these sensitivity indices under some specific conditions. In particular, we show that the relative contribution of an uncertain spatially distributed model input to the variance of the model output increases with its correlation length and decreases with the size of the spatial support considered for model output aggregation. By applying our results to the NOE modelling chain, we also draw a number of lessons to better deal with uncertainties in flood damage modelling and cost-benefit analysis of flood risk management plans.
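The formal result quoted above (the block sensitivity index of a spatially distributed input decreases as the spatial support of the aggregated output grows) can be seen in a toy, uncorrelated-site setting with a simple analytic calculation, sketched below in Python; the linear model y(s) = a*Z + b*eps(s) and its parameters are illustrative, not the NOE model.

# Z is a global uncertain input (variance 1); eps(s) is an independent site-level
# input (variance 1). Averaging the output over n sites shrinks the variance share
# attributable to the spatially distributed input.
a, b = 1.0, 1.0
for n in (1, 4, 16, 64):                # size of the spatial support (number of sites)
    var_spatial = b**2 / n              # variance of the averaged site-level term
    s_spatial = var_spatial / (a**2 + var_spatial)
    print(f"support of {n:3d} sites: first-order index of the spatial input = {s_spatial:.2f}")
# 0.50, 0.20, 0.06, 0.02 -> the global input dominates as the support grows.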
110

Podnikatelský plán nízkonákladového golfového hřiště / Business plan of a low-cost golf course

Bartulec, Jan January 2010 (has links)
The thesis is divided into two parts. The theoretical part presents information about golf, its history and evolution, and views on it. It also deals with the operating principles of a golf resort, its ownership structure, and the existence of a golf club. The end of the theoretical part describes the structure of a business plan in detail. The business plan itself is based on a real project for a low-cost golf course. It respects the investor's requirements and is meant to be used as a manual for operating this golf course. It also includes the procedure for establishing a golf club, the segment-analysis methods applied, and predictions of the club's future activities that take the risk factor into account.
