621 |
The use of artificial intelligence techniques for power analysis. Gwyn, Bryan James. January 1999 (has links)
This thesis reports the research carried out into the use of Artificial Intelligence techniques for Power System Analysis. A number of aspects of Power System analysis and its management are investigated and the application of Artificial Intelligence techniques is researched. The use of software tools for checking the application of power system protection systems, particularly for complex circuit arrangements, was investigated. It is shown that the software provides a more accurate and efficient way of carrying out these investigations. The use of software tools by the National Grid Company plc (UK) for checking the application of protection systems is described, particularly for complex circuit arrangements such as multi-terminal circuits and composite overhead line and cable circuits. Also described is how investigations have been made into an actual system fault that resulted in a failure of protection to operate. Techniques using digital fault records to replay a fault into a static model of protection are used in the example. The need for dynamic modelling of protection is also discussed. Work done on automating the analysis of digital fault records using computational techniques is described. An explanation is given of how a rule-based system has been developed to classify fault types and analyse the response of protection during a power system fault or disturbance in order to determine correct or incorrect operation. The development of expert systems for on-line application in Energy Control Centres (ECC) is reported. The development of expert systems is a continuous process as new knowledge is gained in the field of artificial intelligence and new expert system development tools are built. Efforts are being made towards the on-line application of expert systems in ECCs as a preventive control under normal/alert conditions and as a corrective control during a disturbance. This will enable more secure power system operation. Considerable scope exists in the development of expert systems and their application to power system operation and control. An overview of the many different types of Neural Network has been carried out, explaining terminology and methodology along with a number of techniques used for their implementation. Although the mathematical concepts are not new (many of them were recorded more than fifty years ago), the introduction of fast computers has enabled many of these concepts to be used for today's complex problems. The use of Genetic Algorithm based Artificial Neural Networks is demonstrated for Electrical Load Forecasting and the use of Self Organising Maps is explored for classifying Power System digital fault records. The background to the optimisation work carried out in this thesis is given, together with an introduction to the methods applied, in particular Evolutionary Programming and Genetic Algorithms. Possible solutions to optimisation problems are introduced as either local or global minima, with the latter being the desirable result. Evolutionary computation, which has the potential to produce a global solution to a problem due to the searching mechanisms inherent in its procedures, is discussed. Various mechanisms may be introduced to the genetic algorithm routine to eliminate the problem of premature convergence, thus enhancing the method's chances of producing the best solution (a minimal sketch of such a routine is given below). The other, more traditional, methods of optimisation described include Lagrange multipliers, Dynamic Programming, Local Search and Simulated Annealing.
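As a purely illustrative aside, the sketch below shows a genetic algorithm routine in which elitism and ongoing mutation are used to guard against premature convergence; the test function, population size and rates are assumptions for the example and are not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

def rastrigin(x):
    """Multi-modal test function (global minimum 0 at x = 0) - illustrative only."""
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def ga_minimise(fitness, dim=2, pop_size=60, generations=200,
                mut_rate=0.2, mut_scale=0.3, n_elite=2, bounds=(-5.12, 5.12)):
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    for _ in range(generations):
        costs = np.array([fitness(ind) for ind in pop])
        order = np.argsort(costs)
        elite = pop[order[:n_elite]]          # elitism: carry the best forward unchanged
        children = []
        while len(children) < pop_size - n_elite:
            # binary tournament selection of two parents
            i, j = rng.integers(pop_size, size=2)
            p1 = pop[i] if costs[i] < costs[j] else pop[j]
            i, j = rng.integers(pop_size, size=2)
            p2 = pop[i] if costs[i] < costs[j] else pop[j]
            child = np.where(rng.random(dim) < 0.5, p1, p2)        # uniform crossover
            mask = rng.random(dim) < mut_rate                      # mutation maintains diversity,
            child = child + mask * rng.normal(0, mut_scale, dim)   # countering premature convergence
            children.append(np.clip(child, lo, hi))
        pop = np.vstack([elite, children])
    costs = np.array([fitness(ind) for ind in pop])
    return pop[np.argmin(costs)], costs.min()

best_x, best_f = ga_minimise(rastrigin)
print("best solution", best_x, "cost", best_f)
```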
Only the Dynamic Programming method guarantees a global optimum solution to an optimisation problem; however, for complex problems the method can take a vast amount of time to locate a solution, due to the potential for combinatorial explosion since every possible solution is considered. The Lagrange multiplier method and the local search method are useful for quick location of a global minimum and are therefore suitable when the topography of the optimisation problem is uni-modal. However, in a complex multi-modal problem, a global solution is less likely. The simulated annealing method has been more popular for solving complex multi-modal problems since it includes techniques that allow the search to avoid being trapped in local minima. An Artificial Neural Network and a Genetic Algorithm have been used together to design a neural network for short-term load forecasting. The forecasting model has been used to produce a forecast of the load for each of the 24 hours of the forecast day concerned, using data provided by an Italian power company. The results obtained are promising. In this particular case, the comparison between the results from the Genetic Algorithm - Artificial Neural Network and the Back Propagation - Neural Network shows that the Genetic Algorithm - Artificial Neural Network does not provide a faster solution than the Back Propagation - Neural Network. The application of Evolutionary Programming to fault section estimation is investigated and a comparison made with a Genetic Algorithm approach. To enhance service reliability and to reduce power outages, rapid restoration of the power system is required. As a first step in restoration, the faulted section should be estimated quickly and accurately. Fault Section Estimation (FSE) identifies faulted components in a power system by using information on the operation of protection relays and circuit breakers. However, this task is difficult, especially in cases where a relay or circuit breaker fails to operate and in multiple-fault cases. An Evolutionary Programming (EP) approach has been developed for solving the FSE problem, including malfunctions of protection relays and/or circuit breakers and multiple-fault cases. A comparison is made with the Genetic Algorithm (GA) approach at the same time. Two different population sizes are tested for each case. In general, EP showed faster computational speed than GA, by an average factor of 13. The final results were almost the same. The convergence speed (the required number of generations to obtain an optimum result) is a very important factor in real-time applications. Test results show that EP is better than GA. However, as both EP and GA are evolutionary algorithms, their efficiencies are largely dependent on the complexity of the problem, which may differ from case to case. The use of Artificial Neural Networks to classify digital fault records is investigated, showing that Self Organising Maps could be useful for classifying records if integrated into other systems. Digital fault records are a very useful source of information to the protection engineer to assist with the investigation of a suspected unwanted operation, or failure to operate, of a protection scheme. After a widespread power system disturbance, due to a storm for example, a large number of fault records can be produced. A method of automatically classifying fault records would be very helpful in reducing the amount of time spent in manual analysis, thus assisting the engineer to focus on records that need in-depth analysis.
Fault classification using rule-based methods has already been developed. The completed work is preliminary in nature, and an overview of an extension to this work, involving the extraction of frequency components from the digital fault record data and using these as input to a SOM network, is described.
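To make the final point concrete, here is a minimal, self-contained sketch of a self-organising map grouping frequency-component feature vectors; the FFT-bin features, map size and training schedule are illustrative assumptions rather than details taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def frequency_features(record, n_bins=16):
    """Reduce one fault-record channel to coarse FFT magnitude bins
    (an assumed stand-in for the thesis's frequency-component extraction)."""
    spectrum = np.abs(np.fft.rfft(record))
    bins = np.array_split(spectrum, n_bins)
    feats = np.array([b.mean() for b in bins])
    return feats / (np.linalg.norm(feats) + 1e-12)

def train_som(data, grid=(6, 6), epochs=100, lr0=0.5, sigma0=3.0):
    """Train a small self-organising map; returns the weight grid."""
    n_rows, n_cols = grid
    weights = rng.random((n_rows, n_cols, data.shape[1]))
    coords = np.indices((n_rows, n_cols)).transpose(1, 2, 0)   # (row, col) of each cell
    for t in range(epochs):
        lr = lr0 * np.exp(-t / epochs)
        sigma = sigma0 * np.exp(-t / epochs)
        for x in data[rng.permutation(len(data))]:
            dists = np.linalg.norm(weights - x, axis=2)
            bmu = np.unravel_index(np.argmin(dists), dists.shape)   # best-matching unit
            grid_dist2 = ((coords - np.array(bmu)) ** 2).sum(axis=2)
            h = np.exp(-grid_dist2 / (2.0 * sigma**2))              # neighbourhood function
            weights += lr * h[..., None] * (x - weights)
    return weights

def classify(weights, x):
    """Return the map cell (best-matching unit) for a feature vector."""
    dists = np.linalg.norm(weights - x, axis=2)
    return np.unravel_index(np.argmin(dists), dists.shape)

# Synthetic stand-ins for digitised fault-record channels (50 Hz tone plus noise).
t = np.linspace(0, 0.2, 1000)
records = [np.sin(2 * np.pi * 50 * t) + 0.1 * rng.standard_normal(t.size) for _ in range(20)]
features = np.array([frequency_features(r) for r in records])
som = train_som(features)
print("record 0 maps to cell", classify(som, features[0]))
```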
|
622 |
A new energy absorber for earthquake resistant buildings. Monir, Habib Saeed. January 2001 (has links)
The research work reported in this thesis is associated with the design of an energy absorbing device. The device, as well as being capable of absorbing a high amount of energy, possesses all the necessary properties of a structural member. Most energy absorbing devices do not have the necessary properties to be used as structural members. Their problems have been demonstrated in chapter 1 and chapter 3 of this thesis. In order to overcome these problems, an alternative kind of energy absorbing device has been proposed. The inversion of tubes has been proposed as the basis of the work. This is a well-known energy-absorbing principle and has been widely used in industry and many mechanical engineering cases as the basis of design. However, the device has some disadvantages and these required improvement. The following steps have been taken to improve the energy absorbing characteristics: 1- Normally the energy absorbing capacity of the device is limited due to buckling. This problem has been overcome by including an adhesive within the device. 2- The second problem in this energy-absorbing device is that its elastic stiffness is very low, and this is unacceptable for a structural member. The elastic stiffness has been improved by forming a stiff shell at the top of the tube. 3- The device undergoes a significant change in length during the energy absorbing process and, if this is not compensated for in some way, the device will be useless in the subsequent cycles of vibration. A special mechanism has been installed in the device to solve this problem. This enables the deformation to be compensated for after the absorption process. Two major applications for the device have been studied in the thesis. First, because of its special response to high speed loading, it has been installed in a simply supported framework. The middle member of this framework has been replaced by the energy absorbing device and the behaviour of the framework has been analysed under an explosive load. In order to determine the advantages of installing this device in a framework, the framework has also been analysed without the inclusion of the device. The comparison of these results showed that when the framework is equipped with the absorber, a great reduction in the forces and strains of the members of the framework is achieved. The framework became 2.5 times stronger when just one device was used in the frame. In the second application of the device, its behaviour has been studied as an absorber for the first soft story method. The first soft story is one of the ideas which has been presented for the isolation of buildings from earthquake effects; however, no proper absorber has been suggested for use in this method. This device has an excellent performance in this regard, because of its shortening ability and its compact form, along with its high energy absorbing capacity. Two energy absorbing devices were inserted in the braces of a single degree of freedom structure and subjected to a high rate base acceleration. For comparison, the behaviour of the frame when it was not equipped with the devices was also analysed. The results indicated that, by the inclusion of the absorber, the acceleration was decreased by a factor of more than three. The forces in the members were also three times less than in the frame without the device. Finally, the behaviour of a multi story building has been examined when it was equipped with two energy absorbing devices in the braces of the first floor.
The results showed that a great reduction in the accelerations, velocities and also the forces and moments was achieved, as was the case in the previous example. By using this absorber in the braces, the accelerations and velocities were four times less than in the case where the frame did not include any absorber. In simple terms, this energy absorber is similar to the dampers used in vehicles to reduce vibrations, with the difference that vehicle dampers are active all the time, while this damper is activated only when high rate loading is applied.
|
623 |
The dynamics of floating bodies in a regular wave environment. Williams, K. J. January 1986 (has links)
Theoretical and numerical investigations have been carried out on the use of the Integral Equation Method of solution for the Potential Theory problem of the interaction between a floating body and a train of regular waves in a two-dimensional domain. In particular, a numerical study has been carried out of the indirect method of solution of the integral equation resulting from a distribution of Green's Function sources over a boundary coincident with the immersed surface of the body. It is demonstrated that a significant increase in solution efficiency, with no loss of precision, can be effected by improvements in the general numerical techniques of solution together with the use of a polynomial type distribution of elements over the source boundary. It is also demonstrated that significant improvements in solution accuracy for rectangular sections can be achieved by a slight 'rounding' of the submerged edges of the mathematical model. An experimental investigation of the interaction between a train of regular waves and a substantially rectangular floating body includes measurements of the reflection and transmission characteristics, for both the fixed and floating modes of the body, together with measurements of the body motions. The primary objective of the experimental study is the validation of theoretically predicted interaction parameters derived from the above methods. The experimental programme was designed both to determine the extent of validity of Potential Theory within regimes where diffraction effects predominate, and also to determine the conditions under which the use of Potential Theory alone becomes invalid due to the significant presence of non-linear effects. As a consequence of the results of this investigation, recommendations are made both with regard to the possible achievement of further improvements in solution efficiency and, more importantly, with regard to a general improvement of solution accuracy by the inclusion of the above-mentioned non-linear effects in the theoretical formulations.
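For orientation only, the indirect source-distribution formulation referred to above typically takes the following generic form; the notation is generic and the exact statement used in the thesis may differ.

```latex
% Potential from a distribution of Green's-function sources of strength
% \sigma over the immersed surface S of the body (indirect formulation):
\phi(\mathbf{x}) \;=\; \int_{S} \sigma(\boldsymbol{\xi})\, G(\mathbf{x};\boldsymbol{\xi})\,\mathrm{d}S_{\xi}.
% Applying the body boundary condition \partial\phi/\partial n = V_n on S and the
% classical jump relation for the single-layer potential gives an integral equation
% for the unknown source strength (sign depending on the normal convention):
\pm\tfrac{1}{2}\,\sigma(\mathbf{x}) \;+\; \int_{S} \sigma(\boldsymbol{\xi})\,
\frac{\partial G(\mathbf{x};\boldsymbol{\xi})}{\partial n_{\mathbf{x}}}\,\mathrm{d}S_{\xi}
\;=\; V_n(\mathbf{x}), \qquad \mathbf{x}\in S.
```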
|
624 |
Fretting damage of high carbon chromium bearing steel. Kuno, Masato. January 1988 (has links)
This thesis consists of four sections: the fretting wear properties of high carbon chromium bearing steel; the effect of debris during fretting wear; an introduction to the new fretting wear test apparatus used in this study; and the effects of fretting damage parameters on rolling bearings. The tests were conducted under unlubricated conditions. Using a crossed cylinder contact arrangement, tests were carried out with a normal load of 3 N, a slip amplitude of 50 µm, and a frequency of 30 Hz at room temperature. The new fretting wear test rig consists of a sphere-on-plate arrangement, in which the normal load and slip amplitude were varied. Using the new test rig, tests were performed both at room temperature and at 200 °C, and tensile stresses were applied to the lower stationary specimens during the fretting wear tests. In the fretting wear tests after tempering at 200, 230, 260 and 350 °C in air, the high carbon chromium bearing steel showed low coefficients of friction due to a glaze type oxide film. In the fretting wear tests at 200 °C, a very low coefficient of friction was obtained. Consequently, the oxide films on high carbon chromium bearing steel tempered at 200, 230, 260 and 350 °C were thought to be protective against fretting damage. Fretting wear volumes were measured using different specimen combinations and fretting oscillatory directions relative to the axes of the cylindrical specimens, albeit for the same material couples. It has been found that fretting wear volume is significantly governed by frictional energy (fretting damage per unit area) and by the frequency of metal-to-metal contact, as determined by electrical contact resistance measurements. Metal-to-metal contact was observed throughout all stages of fretting wear, even in the case of full slip fretting wear. Fretting crack initiation is encouraged, but fretting crack propagation rate is not significantly affected, by high normal loads. Compressive residual stresses in the subsurface have little influence on crack initiation, but have a large influence on crack propagation rate. In the study of fracture induced by fretting wear, a critical slip amplitude which led to the shortest fracture life was identified. At the critical slip amplitude (35 µm), a higher coefficient of friction was obtained, and this result suggested a significant effect of the coefficient of friction on fracture induced by fretting wear (or fretting fatigue). The mechanisms of fretting wear and fretting fatigue are also discussed. Fretting wear is predominantly governed by the total tangential shear strain due to fretting oscillation. In contrast, fretting fatigue is dominated by the maximum alternating tangential shear strain energy. As the coefficient of friction significantly affects both the total tangential shear strain and the maximum alternating tangential shear strain energy, it is thought to be the most important factor which needs to be controlled to reduce damage by both fretting wear and fretting fatigue.
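One common way of expressing the frictional-energy dependence noted above is an energy wear law of the following form; this is offered as general background, not as the relationship fitted in the thesis.

```latex
% Dissipated-energy wear law: wear volume proportional to the cumulative
% frictional energy dissipated over all fretting cycles,
V \;\approx\; \alpha\, E_d, \qquad
E_d \;=\; \sum_{\text{cycles}} \oint Q\,\mathrm{d}\delta,
% where Q is the tangential force, \delta the relative slip and
% \alpha an energy wear coefficient.
```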
|
625 |
Vibrational energy flow in structures. Palmer, James Dirk. January 1994 (has links)
This investigation explores the use of an approximate energy flow approach to provide a global modelling tool capable of predicting the pattern and level of vibrational energy flow in complex structures. The modelling approach is based on a differential control volume formulation which, by virtue of its simplified nature, describes the flow of mechanical energy within a structural component in a manner analogous to the flow of thermal energy in heat conduction problems. For complex structures the approach can be implemented using existing finite element software through an analogy between the thermal and vibrational systems. Energy flow predictions along simple beam structures, obtained using the energy flow approach, are compared to "exact" analytical solutions and experimental structural intensity measurements on real structures. This provides useful insight into the capabilities and requirements of the approach, such as the quality of model predictions at lower frequencies and the accuracy requirement for modelling parameters. The task of modelling the transmission of vibrational energy in practical engineering structures is complicated by the partial reflection of incident wave energy at structural discontinuities. Methods to account for this effect are discussed and an approach is developed which can be incorporated into the finite element global modelling scheme. This is used to model a complex multiple transmission path structure which illustrates the ability of the approach to form an effective transmission path ranking tool. Finally, the approach is used to build a representative energy flow model of a ribbed bulkhead structure typical of marine applications. A wavenumber measurement technique is used to assess the wave transmission characteristics of this structure which exhibit strong directional dependence. Predictions provided by the energy flow model are in good general agreement with energy flow measurements obtained from the real structure. Throughout these modelling exercises particular attention is paid to the provision of suitable estimates of the parameters (damping, group velocity, power input and transmission efficiency) on which the accuracy of the model predictions relies. This investigation represents a significant contribution to current knowledge regarding the use of the energy flow approach and its ability to provide representative models of real structures. Although further research is still required, considerable progress has been made and the work documented here provides the framework for a global modelling tool using existing finite element software.
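As background, the heat-conduction analogy mentioned above is commonly written in the following form for the time- and frequency-averaged energy density; the notation is generic and may differ from that used in the thesis.

```latex
% Steady-state energy balance for the averaged vibrational energy density <e>:
-\nabla\cdot\!\left(\frac{c_g^{2}}{\eta\,\omega}\,\nabla\langle e\rangle\right)
\;+\; \eta\,\omega\,\langle e\rangle \;=\; \Pi_{\mathrm{in}},
\qquad
\mathbf{I} \;=\; -\,\frac{c_g^{2}}{\eta\,\omega}\,\nabla\langle e\rangle,
% where c_g is the group velocity, \eta the damping loss factor, \omega the
% angular frequency, \Pi_in the input power density and I the energy flow
% (structural intensity); the form is identical to steady heat conduction
% with a distributed heat sink.
```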
|
626 |
Composite steel beams using precast concrete hollow core floor slabs. Lam, Dennis. January 1998 (has links)
The main aim of this thesis is to develop an insight into the behaviour of composite floors that utilise steel beams acting in combination with precast concrete hollow core floor slabs, and to produce design recommendations for use by industry for this type of construction. Full scale bending tests of proprietary precast prestressed concrete hollow core floor slab units attached through 19 mm diameter headed shear studs to steel Universal Beams (UB) have been carried out to determine the increased strength and stiffness when composite action is considered. The results show the bending strength of the composite beam to be twice that of the bare steel beam, and its flexural stiffness to be more than trebled. In addition to the beam tests, isolated push-off tests and horizontal eccentric compression tests were used to study the horizontal interface shear resistance of the headed studs and the strength of the slab, respectively. Maximum resistances were compared with the predictions of Eurocode EC4, and a reduction formula for the precast effect was derived. In addition to the experimental investigations, finite element (FE) studies were also conducted using the FE package ABAQUS to extend the scope of the experimental work. Results show a 2-dimensional plane stress analysis to be sufficiently accurate, provided the correct material input data obtained from the isolated push-off and compression tests are used. The FE model for the composite beam was designed and validated using the full scale beam tests. A parametric study, involving 45 analyses, was carried out to cover the full range of UB sizes and floor depths used in practice. From the finite element work, design charts are formulated which may be used to simplify the design rules. Given the results of this work, a full interaction composite beam design may be carried out using the proposed design equations. The results show that precast slabs may be used compositely with steel UBs in order to increase both flexural strength and stiffness at virtually no extra cost, except for the headed shear studs. The failure mode is ductile, and may be controlled by the correct use of small quantities of transverse reinforcement and in-situ infill concrete.
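For context, the EC4-type design resistance of a headed shear stud against which measured resistances would be compared takes the following general form; the expression follows the code rather than the thesis, and the precast reduction formula derived in the work is not reproduced here.

```latex
P_{Rd} \;=\; \min\!\left(
  \frac{0.8\, f_u\, (\pi d^{2}/4)}{\gamma_V},\;
  \frac{0.29\, \alpha\, d^{2}\,\sqrt{f_{ck}\, E_{cm}}}{\gamma_V}
\right),
\qquad
\alpha \;=\;
\begin{cases}
  0.2\left(\dfrac{h_{sc}}{d} + 1\right), & 3 \le h_{sc}/d \le 4,\\[6pt]
  1, & h_{sc}/d > 4,
\end{cases}
% d: stud shank diameter, h_sc: stud height, f_u: stud ultimate strength,
% f_ck: concrete cylinder strength, E_cm: concrete modulus, \gamma_V: partial factor.
```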
|
627 |
Heave compensation using time-differenced carrier observations from low cost GPS receivers. Blake, Stephen James. January 2008 (has links)
Vertical reference for hydrographic survey can be provided in two ways: through the use of an expensive and very accurate GPS-aided INS system, or through the classical method of compensating for heave motion measured on board the vessel and using tide data taken from a nearby tide gauge. Whilst the GPS-aided INS approach offers significant advantages in terms of accuracy, its high cost has prohibited its widespread use within the hydrographic survey industry, and the classical method is still prevalent. Heave motion of a survey vessel has traditionally been measured using inertial technologies, which can be expensive and have problems with usability and instability, resulting in higher survey costs and a significant hydrographer input burden. Heave can also be measured with GPS receivers by differencing the measured carrier phase pseudo-range between adjacent epochs, and the recent introduction by U-Blox of the Antaris AEK-4T, an off-the-shelf low cost GPS receiver capable of measuring and recording the carrier phase pseudo-range observable, has allowed the exploration of a novel method of measuring and compensating for vessel heave using off-the-shelf low cost GPS receivers. The work presented in this thesis details a method of compensating for vessel heave motion in bathymetry data that has been developed specifically for use with the U-Blox Antaris receiver. The technique is based on the production of highly accurate velocity estimates using the carrier phase observable. Carrier phase measurements are differenced across adjacent epochs to give relative delta range estimates between receiver and satellite along the direct line of sight, which are then processed to calculate an accurate estimate of the receiver delta position across the epoch, a measurement analogous to receiver velocity. This technique has been termed Temporal Double Differencing (TDD). Integrated vertical velocity estimates produce the relative vertical displacement of the vessel over time. Because of bias errors in the velocity estimates from TDD, this vertical displacement is subject to drift. The drift is removed by passing the data through a high-pass filter designed to stop the drift frequencies yet pass the required frequencies of vertical vessel motion. An obvious advantage of this technique over conventional technologies is cost. Instruments currently on the market are centred on inertial sensors and generally have prices ranging from £12,000 to £25,000. Low cost GPS receivers are priced at around £200, so this technique can have sizeable cost implications for the hydrographic survey industry. In addition, the nature of the TDD algorithm results in a heave sensing technology that is not subject to the turn-induced heave which can affect inertial-based sensors, and it imposes no requirement on the user to account for parameters such as vessel heave characteristics and current heave state. A further advantage over interferometric GPS heave compensation techniques is that the TDD algorithm is stand-alone and requires no reference receiver. Two trials have been undertaken to test the ability of the low cost U-Blox receiver to record accurate phase pseudo-range observables and subsequently produce a heave estimate: a Spirent GPS hardware simulator trial, and a sea trial. The simulator trial has been the first to quantify the errors associated with the measurement of carrier phase pseudo-range observables using low cost commercially available receivers.
The trial used three separate receivers: a Novatel OEM4, a Leica 530 and a low cost U-Blox Antaris. Three scenarios were programmed into the simulator to rigorously test the effects of receiver quality and receiver dynamics on the velocity estimates produced using the TDD algorithm. The sea trial involved fitting various sensors to the vessel, including a Honeywell HG1700 IMU, an Applanix POS-RS GPS-aided INS system and the same three GPS receivers as used in the simulator trial. The POS-RS system and the inertial-based heave sensor were used to provide a reference against which the novel low cost heave output could be compared. The comprehensive nature of the sea trial makes it the first work to compare the results from the TDD heave algorithm using varying grades of receiver, and against truth data from both an inertial-based heave system and a GPS-aided INS. The results of the simulator trial have shown that under static conditions the TDD velocity estimation using the U-Blox Antaris is of comparable quality to that produced using both the Novatel OEM4 and the Leica 530. Under dynamic conditions the performance of the U-Blox Antaris is greatly degraded when undergoing large accelerations, an artefact of the inferior componentry used in the signal tracking loops. The sea trial has demonstrated the ability of the TDD heave algorithm, developed for use with commercially available low cost GPS receivers, to measure vessel heave to a similar standard as inertial-based technologies, at a fraction of the cost and with greatly reduced instability and usability issues compared with those traditionally associated with inertial-based heave sensors.
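A highly simplified sketch of the TDD processing chain described above (epoch-differenced carrier phase, least-squares delta position, integration, high-pass filtering) is given below; the neglect of satellite motion, the geometry handling and the filter design are illustrative assumptions rather than the thesis's implementation.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def tdd_delta_position(phase_prev, phase_curr, los_unit):
    """Temporal double differencing for one epoch pair.

    phase_prev, phase_curr : carrier-phase ranges (m) to the same satellites
    los_unit               : (n_sat, 3) unit line-of-sight vectors, assumed constant
                             over the epoch interval; satellite motion is ignored
                             here for brevity (a real implementation removes it
                             using the broadcast ephemeris).
    Returns the receiver displacement (dx, dy, dz) over the epoch; the receiver
    clock drift is estimated jointly and discarded.
    """
    delta_range = phase_curr - phase_prev                      # m, per satellite
    # Design matrix: projection onto line of sight plus a clock-drift column.
    A = np.hstack([-los_unit, np.ones((los_unit.shape[0], 1))])
    x, *_ = np.linalg.lstsq(A, delta_range, rcond=None)
    return x[:3]

def heave_from_phase(phase, los_unit, rate_hz, cutoff_hz=0.05):
    """Integrate TDD vertical displacements and high-pass filter out the drift.

    phase : (n_epochs, n_sat) carrier-phase ranges in metres (n_sat >= 4).
    """
    deltas = np.array([tdd_delta_position(phase[i], phase[i + 1], los_unit)
                       for i in range(len(phase) - 1)])
    vertical = np.cumsum(deltas[:, 2])                         # integrated "up" displacement
    b, a = butter(2, cutoff_hz / (rate_hz / 2.0), btype="highpass")
    return filtfilt(b, a, vertical)                            # drift-removed heave estimate
```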
|
628 |
Critical evaluation of some suction measurement techniques. Elgabu, Hesham M. January 2013 (has links)
Suction is an important stress-state variable of unsaturated soils. The magnitude of suction affects the shear strength, the hydraulic conductivity, and the volume change behaviour of unsaturated soils. The measurement of soil suction is a prerequisite for the characterisation of unsaturated soils. Soil suction can be determined by adopting either direct or indirect measurement techniques. Despite the several techniques currently available for measuring and controlling matric and total suctions of soils in the laboratory, several aspects related to the various suction measurement techniques, such as the water phase continuity in null-type tests and the compatibility of test results from various measuring techniques, are yet to be explored in detail. Similarly, studies concerning the determination of air-entry values (AEVs) and residual suctions of soils that exhibit volume change during the drying process are limited. Suctions of two soils from Libya (a silty sand and an inorganic clay with intermediate plasticity) were experimentally measured using null-type axis-translation, filter paper, and chilled-mirror dew-point techniques. Axis-translation and vapour equilibrium techniques were used for establishing the drying and wetting suction-water content soil-water characteristic curves (SWCCs) of the soils. Compacted soil specimens were prepared by varying moulding water content, dry density, compaction type, and compaction effort in order to investigate the influence of initial compaction conditions on the measured suctions and SWCCs of the soils. The water content-void ratio relationships (shrinkage curves) of the soils from clod tests were used in conjunction with the drying suction-water content SWCCs to establish the suction-degree of saturation SWCCs, which enabled determination of the air-entry values (AEVs) and residual suctions of the soils. Initially saturated slurried specimens of the soils were also considered for comparison with the test results of the compacted soil specimens. The test results from the investigation showed that the influence of compaction conditions on the SWCCs of the soils was distinct only in the low suction range, whereas its impact was insignificant at higher suctions. The volume change of the soils during the drying process had a significant impact on the AEVs and residual suctions. For initially saturated slurried specimens, the AEVs and the residual suctions of the soils determined from the suction-water content SWCCs were found to be distinctly lower than their counterparts determined from the suction-degree of saturation SWCCs. Suctions corresponding to the plastic limits of the soils agreed well with those determined from the suction-degree of saturation SWCCs, whereas suctions corresponding to the shrinkage limits overestimated the AEVs. An increase in the chamber air pressure soon after the null-type tests were completed clearly indicated that the water phase continuity between the water in the soil specimens, the water in the ceramic disk, and the water in the compartment below the ceramic disk was lacking for all specimens tested. Soil specimens with higher water contents created better continuity in the water phase.
In the high suction range, the test results from the techniques based on vapour equilibrium (i.e., non-contact filter paper, salt solution and chilled-mirror dew-point tests) showed very good compatibility, whereas differences were noted in the low suction range between the test results from the techniques that are based on liquid phase equilibrium (i.e., pressure plate and null-type tests).
|
629 |
Flow of self-compacting concrete. Deeb, Rola. January 2013 (has links)
This thesis describes the steps taken to develop self-compacting concrete (SCC) mixes with and without steel fibres. For the self-compacting concrete mixes without steel fibres, the fulfilment of the flow and cohesiveness criteria is found to be sufficient for the mix design. However, for the design of self-compacting concrete mixes with steel fibres it is found that they must additionally meet the passing ability criterion. The plastic viscosity of the SCC mixes so developed, with and without fibres, is accurately estimated using a micromechanical procedure based on the measured viscosity of the cement paste alone and on the mix proportions. A Lagrangian particle based method, smooth particle hydrodynamics (SPH), is used to model the flow of SCC mixes with or without short steel fibres in slump and L-box tests. An incompressible SPH method is employed to simulate the flow of such non-Newtonian fluids, whose behaviour is described by a Bingham-type model in which the kink in the shear stress versus shear strain rate diagram is first appropriately smoothed out. The basic equations solved in the SPH are the incompressible mass conservation and momentum equations. The simulation of the SCC mixes emphasised the distribution of large aggregate particles of different sizes throughout the flow in the 3-dimensional configurations. On the other hand, the simulation of high strength SCC mixes which contain steel fibres focused on the distribution of fibres and their orientations during the flow in the 3-dimensional configuration. The capabilities of this methodology were validated by comparing the simulation results with the slump flow and L-box tests carried out in the laboratory. A simple method is developed to assess the orientation and distribution of short steel fibres in self-compacting concrete mixes during the flow. A probability density function (PDF) is introduced to represent the fibre orientation variables in three dimensions. Moreover, the orientation variables of each individual fibre in an arbitrary two-dimensional cross-section have been calculated using the geometrical data obtained from the three-dimensional simulations. This is useful for determining the fibre orientation factor (FOF) in practical image analysis on cut sections. The simulations of SCC mixes are also used as an aid at the mix design stage of such concretes. Self-compacting concrete mixes with and without steel fibres are proportioned to cover a wide range of plastic viscosity. All these mixes meet the flow and passing ability criteria, thus ensuring that they will flow properly into the formwork.
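One common way of smoothing the kink in a Bingham-type constitutive law, as described above, is an exponential (Papanastasiou-type) regularisation of the apparent viscosity; the following sketch and its parameter values are illustrative assumptions, not the specific smoothing or mix properties used in the thesis.

```python
import numpy as np

def bingham_effective_viscosity(gamma_dot, tau_y, mu_p, m=100.0):
    """Effective viscosity of a regularised Bingham fluid.

    tau = (mu_p + tau_y * (1 - exp(-m * gamma_dot)) / gamma_dot) * gamma_dot,
    so the apparent viscosity stays finite as the shear rate tends to zero and
    approaches the plastic viscosity mu_p at high shear rates.
    """
    gamma_dot = np.maximum(gamma_dot, 1e-12)   # avoid division by zero
    return mu_p + tau_y * (1.0 - np.exp(-m * gamma_dot)) / gamma_dot

# Illustrative values only (Pa for yield stress, Pa.s for plastic viscosity).
rates = np.logspace(-3, 2, 6)
print(bingham_effective_viscosity(rates, tau_y=200.0, mu_p=40.0))
```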
|
630 |
Enhancing BIM-based data transfer to support the design of low energy buildings. Cemesova, Alexandra. January 2013 (has links)
Sustainable building rating systems and energy efficiency standards promote the design of low energy buildings. The certification process is supported by Building Performance Simulation (BPS), as it can calculate the energy consumption of buildings. However, there is a tendency for BPS not to be used until late in the design process. Building Information Modelling (BIM) allows data related to a building's design, construction and operation to be created and accessed by all of the project stakeholders. This data can also be retrieved by analysis tools, such as BPS. The interoperability between BIM and BPS tools, however, is not seamless. The aim of this thesis is to improve the building design and energy analysis process by focusing on interoperability between tools, and to facilitate the design of low energy buildings. The research process involved the following: undertaking a literature review to identify a problematic area in interoperability, extending an existing neutral data transfer schema, designing and implementing a prototype based on the extension, and validating it. The schema chosen was the Industry Foundation Classes. This can describe a building throughout its lifecycle, but it lacks many concepts needed to describe an energy analysis and its results. It was therefore extended with concepts taken from a BPS tool, the Passive House Planning Package, which was chosen for its low interoperability with BIM tools. The prototype can transfer data between BIM and BPS tools, calculate the annual heat demand of a building, and inform design decision-making. The validation of the prototype was twofold: case studies and a usability test were conducted to quantitatively and qualitatively analyse the prototype. The usability testing involved a mock-up presentation and online surveys. The outcome was that the tool could save time and reduce errors, enhance informed decision-making and support the design of low energy buildings.
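As a purely illustrative sketch of the kind of steady-state heat balance such a prototype evaluates (transmission and ventilation losses minus utilised solar and internal gains, here in an ISO 13790-style monthly form rather than PHPP's exact method), consider the following; all coefficients and values are assumptions made for the example.

```python
def monthly_heat_demand(ua_total, vent_hc, degree_hours, solar_gain, internal_gain, a=1.0):
    """Very simplified monthly space-heating demand (kWh).

    ua_total     : sum of U*A over the envelope (W/K)
    vent_hc      : ventilation heat-loss coefficient (W/K)
    degree_hours : (T_int - T_ext) integrated over the month, in kKh, so W/K * kKh = kWh
    solar_gain, internal_gain : monthly gains (kWh)
    a            : gain-utilisation exponent (placeholder value)
    """
    losses = (ua_total + vent_hc) * degree_hours
    if losses <= 0.0:
        return 0.0
    gains = solar_gain + internal_gain
    ratio = gains / losses
    # Gain utilisation factor in the ISO 13790 style (an assumption, not PHPP's form).
    eta = a / (a + 1) if ratio == 1 else (1 - ratio**a) / (1 - ratio**(a + 1))
    return max(losses - eta * gains, 0.0)

# Illustrative annual total from twelve assumed monthly degree-hour values (kKh).
annual = sum(monthly_heat_demand(ua_total=120.0, vent_hc=35.0, degree_hours=dh,
                                 solar_gain=300.0, internal_gain=250.0)
             for dh in [12.0, 10.0, 8.0, 5.0, 2.0, 0.0, 0.0, 0.0, 1.0, 4.0, 8.0, 11.0])
print(f"annual heating demand: {annual:.0f} kWh")
```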
|