101

Kontorskyla : Kan borrhålskyla ersätta en kylmaskin?

Eriksson, Martin, Göräng, Mikael January 2013 (has links)
The purpose of this report is to compare two methods of cooling a fictitious office building of 2,000 m² located in Västerås. To represent a typical office building, the cooling power demand was set to 50 W/m², giving a total cooling power demand of 100 kW. In one case a refrigeration machine cools the building; in the other, the refrigeration machine is replaced by a number of boreholes providing the same cooling power. To assess the two technologies, energy use and global warming potential were calculated from a life-cycle perspective. The energy use is expressed as embodied energy, that is, all energy used from extraction of the raw materials to the fully installed system, and the global warming potential is calculated for the same stages as total carbon dioxide equivalents. One way to assess the resulting energy saving is an EROI analysis: EROI is calculated as saved energy divided by invested energy and is a dimensionless number indicating how valuable the investment is from an energy point of view. The cooling demand in buildings consists of excess heat arising from, among other things, lighting, computers, copiers and people, and a number of cooling techniques exist to remove this heat. The refrigeration machine is often regarded as the classic way to produce cooling, but it is associated with high energy use during operation, mainly for driving the compressor. For the borehole solution, a circulation pump is the only electricity consumer needed to produce cooling; because this operating energy is much lower than that of the refrigeration machine's compressor, cooling from boreholes is often regarded as free cooling. The building's cooling energy demand was estimated at 40 kWh/m² per year, or 80,000 kWh/year in total, which in turn gives rise to an operating energy demand. The annual electricity demand was calculated to be 26,145 kWh/year for the refrigeration machine and 2,000 kWh/year for the boreholes, corresponding to total energies of 4,235,460 MJ and 324,000 MJ, respectively, over the building's lifetime. Calculating the embodied energy of the two solutions required life-cycle analyses; it soon became clear that none were available for the components, so building product declarations were used instead, supplemented with life-cycle analyses for the constituent materials. Where neither life-cycle analyses nor building product declarations existed, assumptions were made. Information on the contents of a refrigeration machine was lacking, so it was assumed that a heat pump and a dry cooler together can function in the same way as a refrigeration machine. The embodied energy from manufacturing and transport was calculated to be 74,627 MJ for the refrigeration machine and 480,490 MJ for the borehole solution; the carbon dioxide emissions in the same stages were 4.8 tonnes of CO2 equivalents for the refrigeration machine and 29.5 tonnes for the boreholes. The large differences in embodied energy and CO2 equivalents arise from the diesel required to drill the boreholes. This report studies the scenario of replacing a refrigeration machine that has been in use for five years with a number of boreholes of the same cooling power, and whether this can be advantageous from an energy and environmental point of view. The study shows that after only 4.5 years the borehole solution has used less energy, despite its high embodied energy at installation. The main difference lies in the electricity demand during operation, where the boreholes' circulation pump uses considerably less electricity than the refrigeration machine's compressor.
The second category examined in this study is the global warming potential, expressed as the carbon dioxide equivalents arising over the entire lifetime of both solutions. One of the refrigeration machine's emission sources is the refrigerant (R407C); one kg of this refrigerant corresponds to 1,526 kg of carbon dioxide equivalents. It was assumed that 4% of the refrigerant leaks to the surroundings every year over the 45-year lifetime, giving a total greenhouse contribution of 46.6 tonnes of carbon dioxide equivalents. Large carbon dioxide emissions also occur during operation, since the total electricity demands are 1,177 MWh for the refrigeration machine and 90 MWh for the boreholes. A literature study showed that the carbon dioxide emissions from electricity production vary widely depending on the prevailing conditions, from 0 to 1,269 kg/MWh. The boreholes' carbon dioxide emissions nevertheless turned out to be lower than the refrigeration machine's even at low emission factors for electricity production, because of the refrigerant leakage from the refrigeration machine. The results show that regardless of the emissions from electricity production, the boreholes will have a lower global warming potential than the refrigeration machine. If the refrigeration machine were used over the building's lifetime it would have a certain embodied energy, and if the boreholes were used over the building's lifetime they would have another; the difference between these energies is called the saved energy. The invested energy is the energy required to replace the refrigeration machine with the boreholes. From the saved and invested energy, the net energy is first calculated as the difference between them, which came to 3,089,025 MJ. EROI is then calculated as the ratio of saved to invested energy and came to 7.4, which means that replacing an existing refrigeration machine with a borehole solution is advantageous from an energy point of view. / The purpose of this study is to compare cooling from a refrigeration machine and a borehole system. These technologies are chosen because they are often seen as each other's opposites: a refrigeration machine is associated with a large electricity requirement, while a borehole system is often regarded as free cooling. The study is performed on a fictional building located in Västerås. The building has an area of 2,000 m² and a cooling power requirement of 50 W/m². In the scenario studied, the building is already equipped with a refrigeration machine; the goal is to examine whether it is justified to remove this machine and replace it with a borehole system. The chosen environmental impact categories are embodied energy and carbon dioxide equivalents. In order to evaluate the embodied energy, EROI (energy return on investment) is used to quantify the energy saved by removing the refrigeration machine. For the refrigeration machine, most of the energy is used during the operation phase, because of the compressor that produces the cooling. In the borehole system, 40% of the energy is used during the operation phase and 60% during the manufacturing phase. The drilling used 8.1 m³ of diesel fuel, which dominated both the embodied energy and the carbon dioxide emissions of the borehole system. Results show that only 4.5 years after installation, the borehole system has used less total energy. EROI was then calculated as saved energy divided by invested energy, giving an EROI of 7.4. The carbon dioxide emissions from both systems are heavily dependent on the CO2 emissions from electricity generation; however, if a refrigeration machine were used during the building's entire lifetime, the refrigerant leakage would be large enough to outweigh this dependence.
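A minimal sketch of the EROI bookkeeping described above, assuming only the relations stated in the abstract (EROI = saved energy / invested energy; net energy = saved minus invested). The saved and invested figures are back-solved from the reported net energy and EROI; the function and variable names are ours, not the report's.

```python
# Minimal sketch of the EROI relations used in the report (variable names are ours).
def eroi_analysis(saved_MJ: float, invested_MJ: float) -> tuple[float, float]:
    """Return (net energy in MJ, dimensionless EROI)."""
    net_MJ = saved_MJ - invested_MJ
    eroi = saved_MJ / invested_MJ
    return net_MJ, eroi

# Back-solving the reported figures (net energy 3,089,025 MJ, EROI 7.4):
# invested = net / (EROI - 1), saved = EROI * invested.
invested = 3_089_025 / (7.4 - 1)      # ~482,700 MJ
saved = 7.4 * invested                # ~3,571,700 MJ
print(eroi_analysis(saved, invested))  # ~(3089025.0, 7.4)
```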
102

Site Characterization And Seismic Hazard Analysis With Local Site Effects For Microzonation Of Bangalore

Anbazhagan, P 07 1900 (has links)
Seismic hazard analysis and microzonation of cities make it possible to characterize the potential seismic areas that have to be taken into account when designing new structures or retrofitting existing ones. Studying the seismic hazard and preparing geotechnical microzonation maps provide an effective basis for city planning and input for earthquake-resistant design of structures in an area. Seismic hazard is the study of expected earthquake ground motions at any point on the earth. Microzonation is the process of subdividing a region into a number of zones based on earthquake effects at the local scale. Seismic microzonation is the process of estimating the response of soil layers under earthquake excitation and thus the variation of ground motion characteristics on the ground surface. Geotechnical site characterization and assessment of site response during earthquakes are crucial phases of seismic microzonation with respect to ground shaking intensity, attenuation, amplification rating and liquefaction susceptibility. Microzonation maps of seismic hazards can be expressed in relative or absolute terms, on an urban block-by-block scale, based on local soil conditions (such as soil types) that affect ground shaking levels or vulnerability to soil liquefaction. Such maps provide general guidelines for integrated planning of cities and for positioning the types of new structures best suited to an area, along with information on the relative damage potential of existing structures in a region. In the present study an attempt has been made to characterize the sites, to carry out seismic hazard analysis considering local site effects, and to develop microzonation maps for Bangalore. Seismic hazard analysis and microzonation of Bangalore are addressed in three parts. The first part estimates the seismic hazard using seismotectonic and geological information. The second part deals with site characterization using geotechnical and shallow geophysical techniques; an area of 220 sq. km encompassing the Bangalore Municipal Corporation has been chosen as the study area for this part of the investigation. There were once over 150 lakes in this area, but most have dried up due to erosion and encroachment, leaving only 64 at present within the 220 sq. km and emphasizing the need to study site effects. In the last part, local site effects are assessed by carrying out one-dimensional (1-D) ground response analysis (using the program SHAKE2000) with both borehole SPT data and shear wave velocity survey data within the 220 sq. km area. Further, field experiments using microtremor studies have been carried out (jointly with NGRI) to evaluate the predominant frequency of the soil columns; the same has been assessed using 1-D ground response analysis and compared with the microtremor results. The Seed and Idriss simplified approach has been adopted to evaluate liquefaction susceptibility and liquefaction resistance. Microzonation maps have been prepared for Bangalore city covering the 220 sq. km area at a scale of 1:20000. Deterministic Seismic Hazard Analysis (DSHA) for Bangalore has been carried out by considering past earthquakes, assumed subsurface fault rupture lengths and a point-source synthetic ground motion model. The seismic sources for the region have been compiled from the seismotectonic atlas map of India and from lineaments identified on satellite remote sensing images.
Analysis of lineaments and faults helps in understanding the regional seismotectonic activity of the area. The Maximum Credible Earthquake (MCE) has been determined by considering the regional seismotectonic activity within about a 350 km radius around Bangalore. Earthquake data were collected from the United States Geological Survey (USGS); the Indian Meteorological Department (IMD), New Delhi; the Geological Survey of India (GSI); the Amateur Seismic Centre (ASC); the National Geophysical Research Institute (NGRI), Hyderabad; the Centre for Earth Science Studies (CESS), Akkulam, Kerala; the Gauribidanur (GB) seismic station; and other public domain sources. The source magnitude for each source was chosen as the maximum reported past earthquake close to that source, and the shortest distance from each source to Bangalore was obtained from the newly prepared seismotectonic map of the area. Using these details and the attenuation relation developed for southern India by Iyengar and Raghukanth (2004), the peak ground acceleration (PGA) has been estimated. A parametric study has been carried out to determine subsurface fault rupture lengths using past earthquake data and the Wells and Coppersmith (1994) relation between subsurface rupture length and earthquake magnitude. Further, the seismological model developed by Boore (1983, 2003), implemented in the SMSIM program, has been used to generate synthetic ground motions from the vulnerable sources identified by the above two methods. From these three approaches a maximum PGA of 0.15 g was estimated for Bangalore. This value was obtained for a maximum credible earthquake of moment magnitude 5.1 from the Mandya-Channapatna-Bangalore lineament. Considering this lineament and MCE, synthetic ground motions have been generated for 850 borehole locations and used to prepare a PGA map at rock level. Past seismic data covering almost 200 years have been collected from different sources such as IMD, BARC (Gauribidanur array), NGRI, CESS, the ASC centre, USGS and other public domain data. The seismic data are seen to be homogeneous for the last four decades irrespective of magnitude. Seismic parameters were then evaluated using the data for the last four decades and also the mixed data (using Kijko's analysis) for the Bangalore region, and are found to be comparable with earlier reported seismic parameters for south India. The probabilities of distance, magnitude and peak ground acceleration have been evaluated for the six most vulnerable sources using PSHA (Probabilistic Seismic Hazard Analysis). The mean annual rate of exceedance has been calculated for all six sources at rock level, and cumulative probabilistic hazard curves have been generated at bedrock level for peak ground acceleration and spectral acceleration. Spectral accelerations corresponding to a period of 1 s and 5% damping have been evaluated. For the design of structures, a uniform hazard response spectrum (UHRS) at rock level has been developed for 5% damping corresponding to a 10% probability of exceedance in 50 years. The peak ground acceleration (PGA) values corresponding to a 10% probability of exceedance in 50 years are comparable to the PGA values obtained in the deterministic seismic hazard analysis (DSHA) and higher than the Global Seismic Hazard Assessment Program (GSHAP) maps of Bhatia et al. (1997) for the Indian shield area. A 3-D subsurface model with geotechnical data has been generated for site characterization of Bangalore.
A base map of Bangalore city (220 sq. km) with several layers of information (outer and administrative boundaries, contours, highways, major roads, minor roads, streets, railroads, water bodies, drains, landmarks and borehole locations) has been generated. A GIS database has been developed for collating and synthesizing the geotechnical data available from different sources, together with a three-dimensional view of the soil strata presenting the various geotechnical parameters with depth in an appropriate format. The reduced level of rock in the Bangalore subsurface (termed the "engineering rock depth", corresponding to about Vs > 700 m/s) and its spatial variability have been evaluated using an Artificial Neural Network (ANN). Observed SPT 'N' values have been corrected by applying the necessary corrections so that they can be used for engineering studies such as site response and liquefaction analysis. Site characterization has also been carried out using shear wave velocities measured with MASW. MASW (Multichannel Analysis of Surface Waves) is a geophysical method that generates a shear-wave velocity (Vs) profile (i.e., Vs versus depth) by analysing Rayleigh-type surface waves on a multichannel record. A MASW system consisting of a 24-channel Geode seismograph with 24 geophones of 4.5 Hz natural frequency was used in this investigation. The shear wave velocity of the Bangalore subsurface soil has been measured, and a correlation has been developed between shear wave velocity (Vs) and corrected standard penetration test (SPT) 'N' values. About 58 one-dimensional (1-D) and 20 two-dimensional (2-D) MASW surveys have been carried out within the 220 sq. km Bangalore urban area. Dispersion curves and 1-D and 2-D shear wave velocity profiles have been evaluated using the SurfSeis software. Using the 1-D shear wave velocities, the average shear wave velocity of Bangalore soils has been evaluated for depths of 5 m, 10 m, 15 m, 20 m, 25 m and 30 m (Vs30). Sub-soil classification for local site effect evaluation has been carried out based on the average shear wave velocity over 30 m depth (Vs30) using the NEHRP (National Earthquake Hazards Reduction Program) and IBC (International Building Code) classifications; Bangalore falls into site class D. Mapping clearly indicates that the soil depths obtained from MASW closely match the soil layers in the bore logs. Shear wave velocities measured at 38 locations close to SPT boreholes have been used to generate a correlation between shear wave velocity and corrected 'N' values using a power fit; the developed relationship corresponds well with the published relationships of the Japan Road Association. Bangalore city, a fast-growing urban centre with a low-to-moderate earthquake history and a highly altered soil structure (due to large-scale reclamation of land), has been the focus of this work. There were once over 150 lakes, but most have dried up due to erosion and encroachment, leaving only 64 at present in the 220 sq. km area. In the present study, an attempt has been made to assess the site response using geotechnical and geophysical data and field studies. The subsurface profiles of the 220 sq. km study area are represented by 170 geotechnical bore logs and 58 shear wave velocity profiles obtained from the MASW surveys. The data from these geotechnical and geophysical techniques have been used to study the site response.
These soil properties and the synthetic ground motions for each borehole location were then used to study local site effects by carrying out one-dimensional ground response analysis using the program SHAKE2000. Response and amplification spectra have been evaluated for each soil layer at every borehole location. The natural period of the soil column, the peak spectral acceleration and the frequency at peak spectral acceleration have been evaluated for each borehole and presented as maps. The predominant frequencies obtained from the two methods have been compared, and a correlation between corrected SPT 'N' values and low-strain shear modulus has been generated. Ambient noise was recorded at 54 locations in the 220 sq. km area of Bangalore city using L4-3D short-period sensors (CMG3T) equipped with a digital data acquisition system; the predominant frequencies obtained from the ground response studies and from the microtremor measurements are comparable. To study the liquefaction hazard in Bangalore, a liquefaction hazard assessment has been carried out using standard penetration test (SPT) data and soil properties. The factor of safety against liquefaction of each soil layer has been evaluated based on the simplified procedure of Seed and Idriss (1971) and the subsequent revisions of Seed et al. (1983, 1985), Youd et al. (2001) and Cetin et al. (2004). The cyclic stress ratio (CSR) resulting from earthquake loading is calculated considering a moment magnitude of 5.1 and the amplified peak ground acceleration. The cyclic resistance ratio (CRR) is obtained from the corrected SPT 'N' values and soil properties. The factor of safety against liquefaction is then calculated from these stress ratios, applying the necessary magnitude scaling factor for the maximum credible earthquake. A simple spreadsheet was developed to carry out the calculation for each bore log. The factors of safety against liquefaction were grouped for the purpose of classifying the Bangalore (220 sq. km) area for liquefaction hazard. Using the 2-D base map of Bangalore city, the liquefaction hazard map was prepared using AutoCAD and ArcGIS; the results were grouped into four classes and presented as two-dimensional maps. Liquefaction susceptibility was also assessed by conducting laboratory cyclic triaxial tests on undisturbed soil samples collected at a few locations.
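The spreadsheet calculation outlined above can be sketched roughly as follows. The depth-reduction and magnitude-scaling correlations are commonly published forms (e.g. among those listed in Youd et al., 2001) chosen for illustration, and the example layer inputs are hypothetical values, not data from the Bangalore boreholes.

```python
# Sketch of a simplified Seed & Idriss-type factor-of-safety calculation (illustrative only).

def stress_reduction_rd(depth_m: float) -> float:
    """Depth reduction factor (a Liao & Whitman-type form for shallow depths)."""
    return 1.0 - 0.00765 * depth_m if depth_m <= 9.15 else 1.174 - 0.0267 * depth_m

def cyclic_stress_ratio(a_max_g: float, sigma_v_kPa: float, sigma_v_eff_kPa: float,
                        depth_m: float) -> float:
    """CSR induced by the earthquake at a given depth."""
    return 0.65 * a_max_g * (sigma_v_kPa / sigma_v_eff_kPa) * stress_reduction_rd(depth_m)

def magnitude_scaling_factor(Mw: float) -> float:
    """One commonly cited MSF form: 10^2.24 / Mw^2.56."""
    return 10 ** 2.24 / Mw ** 2.56

def factor_of_safety(crr_75: float, csr: float, Mw: float = 5.1) -> float:
    """FS against liquefaction; CRR7.5 comes from corrected SPT 'N' values and fines content."""
    return crr_75 * magnitude_scaling_factor(Mw) / csr

# Example layer: amplified PGA 0.15 g, depth 6 m, sigma_v = 110 kPa, sigma_v' = 75 kPa, CRR7.5 = 0.12
csr = cyclic_stress_ratio(0.15, 110.0, 75.0, 6.0)
print(round(factor_of_safety(0.12, csr), 2))   # FS > 1 indicates liquefaction is not predicted
```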
103

Rapid numerical simulation and inversion of nuclear borehole measurements acquired in vertical and deviated wells

Mendoza Chávez, Alberto 10 August 2012 (has links)
The conventional approach for estimation of in-situ porosity is the combined use of neutron and density logs. These nuclear borehole measurements are influenced by fundamental petrophysical, fluid, and geometrical properties of the probed formation, including saturating fluids, matrix composition, mud-filtrate invasion and shoulder beds. Advanced interpretation methods that include numerical modeling and inversion are necessary to reduce environmental effects and non-uniqueness in the estimation of porosity. The objective of this dissertation is two-fold: (1) to develop a numerical procedure to rapidly and accurately simulate nuclear borehole measurements, and (2) to simulate nuclear borehole measurements in conjunction with inversion techniques. Of special interest is the case of composite rock formations of sand-shale laminations penetrated by high-angle and horizontal (HA/HZ) wells. In order to quantify shoulder-bed effects on neutron and density borehole measurements, we perform Monte Carlo simulations across formations of various thicknesses and borehole deviation angles with the Monte Carlo N-Particle transport code MCNP. In so doing, we assume dual-detector tool configurations that are analogous to those of commercial neutron and density wireline measuring devices. Simulations indicate significant variations of vertical (axial) resolution of neutron and density measurements acquired in HA/HZ wells. In addition, combined azimuthal- and dip-angle effects can introduce biases in porosity estimation and bed-boundary detection, which are critical for the assessment of hydrocarbon reserves. To enable inversion and more quantitative integration with other borehole measurements, we develop and successfully test a linear iterative refinement approximation to rapidly simulate neutron, density, and passive gamma-ray borehole measurements. Linear iterative refinement accounts for spatial variations of Monte Carlo-derived flux sensitivity functions (FSFs) used to simulate nuclear measurements acquired in non-homogeneous formations. We use first-order Born approximations to simulate variations of a detector response due to spatial variations of the formation's energy-dependent cross sections. The method incorporates two-dimensional (2D) and three-dimensional (3D) FSF capabilities to simulate neutron and density measurements acquired in vertical and HA/HZ wells, respectively. We calculate FSFs for a wide range of formation cross-section variations and for borehole environmental effects to quantify the spatial sensitivity and resolution of neutron and density measurements. Results confirm that the spatial resolution limits of neutron measurements can be significantly influenced by the proximity of layers with large contrasts in porosity. Finally, we implement 2D sector-based inversion of azimuthal logging-while-drilling (LWD) density field measurements with the fast simulation technique. Results indicate that inversion improves the petrophysical interpretation of density measurements acquired in HA/HZ wells. Density images constructed with inversion yield improved porosity-feet estimates compared with the standard and enhanced compensation techniques used commercially to post-process mono-sensor densities.
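A conceptual sketch, under simplifying assumptions, of the first-order flux-sensitivity-function update that the fast simulation method builds on. The FSF grid and cross-section perturbation below are hypothetical placeholders; in the dissertation the FSFs come from full MCNP Monte Carlo calculations.

```python
# First-order (Born-type) perturbation of a detector response using a precomputed
# flux sensitivity function (FSF): R ~= R0 + sum over cells of FSF * (change in cross section).
import numpy as np

def perturbed_response(base_response: float, fsf: np.ndarray, delta_sigma: np.ndarray) -> float:
    """First-order response estimate. Linear iterative refinement would re-evaluate the
    FSF for the updated formation and repeat this step until the response stops changing."""
    return base_response + float(np.sum(fsf * delta_sigma))

# Toy 2D grid (depth x radial cells): uniform sensitivity, a perturbed bed at rows 4-5
fsf = np.full((10, 10), 0.01)
delta_sigma = np.zeros((10, 10))
delta_sigma[4:6, :] = 0.05
print(perturbed_response(1.0, fsf, delta_sigma))   # 1.0 + 20 cells * 0.01 * 0.05 = 1.01
```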
104

Inversion-based petrophysical interpretation of logging-while-drilling nuclear and resistivity measurements

Ijasan, Olabode 01 October 2013 (has links)
Undulating well trajectories are often drilled to improve length exposure to rock formations, target desirable hydrocarbon-saturated zones, and enhance the resolution of borehole measurements. Despite these merits, undulating wells can introduce adverse conditions to the interpretation of borehole measurements that are seldom observed in vertical wells penetrating horizontal layers. Common examples are polarization horns observed across formation bed boundaries in borehole resistivity measurements acquired in highly deviated wells. Consequently, conventional interpretation practices developed for vertical wells can yield inaccurate results in high-angle and horizontal (HA/HZ) wells. A reliable approach to account for well-trajectory and bed-boundary effects in the petrophysical interpretation of well logs is the application of forward and inverse modeling techniques, because of their explicit use of measurement response functions. The main objective of this dissertation is to develop inversion-based petrophysical interpretation methods that quantitatively integrate logging-while-drilling (LWD) multi-sector nuclear (i.e., density, neutron porosity, photoelectric factor, natural gamma ray) and multi-array propagation resistivity measurements. Under the assumption of a multi-layer formation model, the inversion approach estimates formation properties specific to a given measurement domain by numerically reproducing the available measurements. Subsequently, compositional multi-mineral analysis of the inverted layer-by-layer properties is implemented for volumetric estimation of rock and fluid constituents. The most important prerequisite for efficient petrophysical inversion is fast and accurate forward models that incorporate specific measurement response functions for numerical simulation of LWD measurements. In the nuclear measurement domain, first-order perturbation theory and flux sensitivity functions (FSFs) are reliable and accurate for rapid numerical simulation. Albeit efficient, these first-order approximations can be inaccurate when modeling neutron porosity logs, especially in the presence of borehole environmental effects (tool standoff and/or invasion) and across highly contrasting beds and complex formation geometries. Accordingly, a secondary thrust of this dissertation is the introduction of two new methods for improving the accuracy of rapid numerical simulation of LWD neutron porosity measurements. The two methods are: (1) a neutron-density petrophysical parameterization approach for describing the formation's macroscopic cross section, and (2) a one-group neutron diffusion flux-difference method for estimating perturbed spatial neutron porosity fluxes. Both methods are validated with full Monte Carlo (MC) calculations of spatial neutron detector FSFs and subsequent simulations of neutron porosity logs in the presence of LWD azimuthal standoff, invasion, and highly dipping beds. Analysis of field and synthetic verification examples with the combined resistivity-nuclear inversion method confirms that inversion-based estimation of hydrocarbon pore volume in HA/HZ wells is more accurate than conventional well-log analysis. Estimated hydrocarbon pore volume from conventional analysis can give rise to errors as high as 15% in undulating HA/HZ intervals.
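As a rough illustration of the inversion idea described above (estimating layer properties by numerically reproducing measured logs with a forward model), the sketch below substitutes a toy vertical averaging kernel for the actual nuclear and resistivity simulators. It is a generic data-fitting loop under that stand-in assumption, not the dissertation's algorithm.

```python
# Generic layer-property inversion by minimising the misfit between simulated and
# "measured" logs. The forward model is a simple averaging kernel standing in for
# the tool response function.
import numpy as np

def forward(layer_props: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Simulate a log as the tool response (kernel) applied to layer properties."""
    return np.convolve(layer_props, kernel, mode="same")

def invert(measured: np.ndarray, kernel: np.ndarray, n_iter: int = 200, lr: float = 1.0) -> np.ndarray:
    """Gradient-descent fit of || forward(p) - measured ||^2."""
    p = measured.copy()                      # initial guess: the measured log itself
    for _ in range(n_iter):
        residual = forward(p, kernel) - measured
        gradient = np.convolve(residual, kernel[::-1], mode="same")  # adjoint of the convolution
        p -= lr * gradient
    return p

kernel = np.array([0.2, 0.6, 0.2])                 # hypothetical 3-sample tool response
true_layers = np.repeat([0.10, 0.25, 0.08], 20)    # three beds of "porosity"
measured = forward(true_layers, kernel)
print(np.round(invert(measured, kernel)[[10, 30, 50]], 3))  # recovers roughly [0.10, 0.25, 0.08]
```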
105

PRESSURE CORE ANALYSIS: THE KEYSTONE OF A GAS HYDRATE INVESTIGATION

Schultheiss, Peter, Holland, Melanie, Roberts, John, Humphrey, Gary 07 1900 (has links)
Gas hydrate investigations are converging on a suite of common techniques for hydrate observation and quantification. Samples retrieved and analyzed at full in situ pressures are the "gold standard" against which the physical and chemical analysis of conventional cores, as well as the interpretation of geophysical data, are calibrated and ground-truthed. Methane mass balance calculations from depressurization of pressure cores provide the benchmark for gas hydrate concentration assessment. Nondestructive measurements of pressure cores have removed errors in the estimation of pore volume, making this methane mass balance technique accurate and robust. Using methane mass balance data to confirm chlorinity baselines makes pore-water freshening analysis more accurate. High-resolution nondestructive analysis of gas-hydrate-bearing cores at in situ pressures and temperatures also provides detailed information on the in situ nature and morphology of gas hydrate in sediments, allowing better interpretation of conventional core thermal images as well as downhole electrical resistivity logs. The detailed profiles of density and Vp, together with spot measurements of Vs, electrical resistivity, and hardness, provide background data essential for modeling the behavior of the formation on a larger scale. X-ray images show the detailed hydrate morphology, which provides clues to the mechanism of deposit formation and data for modeling the kinetics of deposit dissociation. Gas-hydrate-bearing pressure cores subjected to X-ray tomographic reconstruction provide evidence that gas hydrate morphology in many natural sedimentary environments is particularly complex and impossible to replicate in the laboratory. Even when only a small percentage of the sediment column is sampled with pressure cores, these detailed measurements greatly enhance the understanding and interpretation of the more continuous data sets collected by conventional coring and downhole logging. Pressure core analysis has become the keystone that links these data sets together and is an essential component of modern gas hydrate investigations.
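A simplified sketch of the methane mass-balance idea mentioned above, assuming ideal-gas behaviour at laboratory conditions and standard structure-I hydrate properties; the core volumes, lab conditions and dissolved-gas term are hypothetical inputs, not data from the paper.

```python
# Convert the gas volume released on depressurisation of a pressure core into a
# hydrate pore saturation (illustrative mass balance, standard literature constants).
R = 8.314                     # J/(mol K)
HYDRATE_MOLAR_VOL = 131e-6    # m3 of hydrate per mol CH4 (CH4*5.75H2O, density ~0.91 g/cm3)

def hydrate_saturation(gas_volume_L: float, pore_volume_L: float,
                       lab_T_K: float = 293.0, lab_P_kPa: float = 101.3,
                       dissolved_mol_per_L: float = 0.0) -> float:
    """Fraction of pore space occupied by hydrate, from the released methane volume."""
    n_total = (lab_P_kPa * 1e3) * (gas_volume_L * 1e-3) / (R * lab_T_K)   # ideal-gas moles
    n_hydrate = max(n_total - dissolved_mol_per_L * pore_volume_L, 0.0)   # subtract dissolved CH4
    return n_hydrate * HYDRATE_MOLAR_VOL / (pore_volume_L * 1e-3)

# Example: 25 L of methane (at lab conditions) from a core section with 0.9 L pore volume
print(round(hydrate_saturation(25.0, 0.9, dissolved_mol_per_L=0.05), 2))   # ~0.14 pore saturation
```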
106

Feasibility of rock characterization for mineral exploration using seismic data

Harrison, Christopher Bernard January 2009 (has links)
The use of seismic methods for mineral exploration in the hard rock environments of Western Australia is a new and burgeoning technology. Traditionally, mineral exploration has relied upon potential field methods and surface prospecting to reveal shallow targets for economic exploitation. These methods have been, and will continue to be, effective, but they lack the lateral and depth resolution needed to image deeper mineral deposits for targeted mining. With global demand for minerals, and gold in particular, increasing, and with shallower targets harder to find, new methods to uncover deeper mineral reserves are needed. Seismic reflection imaging, hard rock borehole data analysis, seismic inversion and seismic attribute analysis together give the mineral industry spatial and volumetric exploration techniques that can reveal high-value, deeper mineral targets. / In 2002, two high-resolution seismic lines, the East Victory and Intrepid, were acquired along with sonic logging to assess the feasibility of seismic imaging and rock characterisation at the St. Ives gold camp in Western Australia. An innovative research project was undertaken combining seismic processing, rock characterisation, reflection calibration, seismic inversion and seismic attribute analysis to show that volumetric predictions of rock type and gold content may be viable in hard rock environments. Accurate seismic imaging and reflection identification proved to be a challenging but achievable task in the hard rock environment of the Yilgarn Craton. Accurate results were hampered by crooked seismic line acquisition, low signal-to-noise ratio, regolith distortions, small elastic property variations in the rock, and a limited volume of sonic logging. Each of these challenges, however, had a systematic solution that allowed accurate results to be achieved. / Seismic imaging was successfully completed on both the East Victory and Intrepid data sets, revealing complex structures at depths from as shallow as 100 metres down to 3000 metres. The successful imaging required homogenisation of the regolith to eliminate regolith travel-time distortions, and accurate constant velocity analysis for reflection focusing using migration. Verification of the high-amplitude reflections within each image was achieved through integration of surface geological and underground mine data as well as calibration with log-derived synthetic seismograms. The most accurate imaging results were ultimately achieved on the East Victory line, which had a good signal-to-noise ratio and a close-to-straight acquisition direction compared with the more crooked Intrepid seismic line. / The sonic logs from both the East Victory and Intrepid seismic lines were comprehensively analysed by re-sampling and separating the data based on rock type, structure type, alteration type and Au assay. Cross-plotting of the log data revealed that statistically meaningful separation between harder and softer rocks, as well as between sheared and un-sheared rock, was possible based solely on compressional-wave velocity, shear-wave velocity, density, acoustic impedance and elastic impedance. These results were used successfully to derive empirical relationships between seismic attributes and geology. Calibration of the logs and seismic data provided proof that reflections, especially high-amplitude reflections, correlated well with certain rock properties as expected from the sonic data, including sheared zones with high gold content.
The correlation value, however, varied with the signal-to-noise ratio and the crookedness of the seismic line. Subsequent numerical modelling confirmed that separating soft from hard rocks can be based on both the general reflectivity pattern and impedance contrasts. / Indeed, impedance inversions on the calibrated seismic and sonic data produced reliable volumetric separations between harder rocks (basalt and dolerite) and softer rocks (intermediate intrusives, mafic and volcaniclastic rocks). Acoustic impedance inversions produced the most statistically valid volumetric predictions, and the simultaneous use of acoustic and elastic inversions produced a stable separation of softer and harder rock zones. Similarly, Lambda-Mu-Rho inversions showed good separation between softer and harder rock zones. With high-gold-content rock associated more with "softer" hard rocks and sheared zones, these volumetric inversions provide valuable information for targeted mining. The geostatistical method applied to attribute analysis, however, was highly ambiguous due to low correlations and thus produced overly generalised predictions. The overall reliability of the seismic inversion results depended on the quality and quantity of sonic data, again leaving the East Victory data set with superior results compared with the Intrepid data set. / In general, detailed processing and analysis of the 2D seismic data, and the study of the relationship between the recorded wave-field and rock properties measured from borehole logs, core samples and open-cut mining, revealed that positive correlations can be developed between the two. The results of this rigorous research show that rock characterisation using seismic methods can greatly benefit the mineral industry.
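For reference, the impedance contrasts discussed above follow from the standard normal-incidence relations; the velocities and densities in this sketch are generic hard-rock values chosen for illustration, not the St. Ives log statistics.

```python
# Acoustic impedance and normal-incidence reflection coefficient between two layers.
def acoustic_impedance(vp_m_s: float, density_kg_m3: float) -> float:
    return vp_m_s * density_kg_m3

def reflection_coefficient(z_upper: float, z_lower: float) -> float:
    """Normal-incidence reflection coefficient across an interface."""
    return (z_lower - z_upper) / (z_lower + z_upper)

# e.g. an intermediate intrusive ("softer") overlying a dolerite ("harder")
z_soft = acoustic_impedance(5600.0, 2750.0)
z_hard = acoustic_impedance(6500.0, 2950.0)
print(round(reflection_coefficient(z_soft, z_hard), 3))   # ~0.11: a mappable high-amplitude contrast
```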
107

Numerical analysis using simulations for a geothermal heat pump system. : Case study: modelling an energy efficient house

Ilisei, Gheorghe January 2018 (has links)
Ground source resources are becoming more and more popular, and ground source heat pumps are now frequently used for heating and cooling different types of buildings. This thesis aims to contribute to the development of thermal modelling of borehole heat storage systems. Its objective is to investigate the possibility of implementing a GSHP (ground source heat pump) with vertical boreholes to cover the heating and cooling demand of a passive house, and to highlight certain advantages of this equipment even in the case of a small building (e.g. a residential house). A case study is presented in which a suitable modelling workflow for estimating the thermal behaviour of such GSHP systems is built by combining the outcomes of different modelling programs. For this purpose, a very efficient residential solar house (the EFden House, a passive single-family house designed and built in Bucharest for academic purposes) is analysed. The numerical results are produced using the software DesignBuilder, EED (Earth Energy Designer) and a sizing method for the length of the boreholes (the ASHRAE method). The idea of using two different modelling programs and an additional sizing method for the borehole heat exchanger design (the ASHRAE method) is to make sure that all calculations and results are valid and reliable when analysing such a system theoretically, in the first phases of a project, before performing a geotechnical study or a thermal response test to assess its feasibility. The results highlight that the borehole length, which is the main design parameter and also a good indicator of the system cost, is directly influenced by other fundamental variables such as the thermal conductivity of the grout, the soil and the heat carrier fluid. Correlations between these parameters and the COP (coefficient of performance) of the system were also established. Sizing the borehole length with two different methods demonstrates the reliability of the modelling tools: the results differed by only 2.5%. Moreover, the borehole length is very important, as it was calculated that it can cause a difference in the electricity consumption of the GSHP of up to 28%. The study also shows that the design of the whole system can be carried out beforehand using only modelling tools, without performing in-situ tests. The method is intended as an efficient way to estimate the borehole length of a GSHP system using several modelling tools.
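A much-simplified, single-resistance version of the borehole sizing idea behind the ASHRAE method referenced above: required length scales with the design heat rate and the thermal resistances, divided by the temperature difference between undisturbed ground and fluid. The full ASHRAE procedure handles multiple load pulses and a long-term ground temperature penalty; every parameter value below is an illustrative assumption, not EFden House design data.

```python
# Simplified single-load borehole sizing sketch (illustrative only).
def required_borehole_length(q_ground_W: float,
                             r_borehole: float,      # borehole thermal resistance, m.K/W
                             r_ground: float,        # effective ground resistance, m.K/W
                             t_ground_C: float,      # undisturbed ground temperature
                             t_fluid_mean_C: float   # mean fluid temperature at peak load
                             ) -> float:
    """Total borehole length (m) for a single design heat-extraction rate."""
    return q_ground_W * (r_borehole + r_ground) / (t_ground_C - t_fluid_mean_C)

# Example: 6 kW extracted from the ground, Rb = 0.10 m.K/W, Rg = 0.18 m.K/W,
# ground at 10 C, mean fluid temperature 0 C during peak heating
L = required_borehole_length(6000.0, 0.10, 0.18, 10.0, 0.0)
print(round(L))   # ~168 m, i.e. roughly one or two boreholes for a small house
```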
108

Energikartläggning av ett bostadshus från 2016 / Energy mapping of a dwelling house from 2016

El-Homsi, Patric, Fredrik, Bramstedt January 2018 (has links)
The building examined in this study was completed in October 2016 and is located at Kvarnvägen 31 in Gemla. The purpose is to map the energy use and determine whether the installation of solar collectors would be beneficial. The goal is to map the energy use, present improvement measures and analyse the technical installations. The methods of the study consisted of a study visit, site visits, a review of drawings and a visual inspection with a thermal camera. To map and identify the energy use, the building envelope and the technical installations were modelled in VIP-Energy. The result of the energy mapping matched the projected energy use, and the resulting energy declaration places the building in energy class B. Having solar collectors installed proved to be theoretically energy- and cost-effective if they are connected as proposed. The existing ventilation system in the building is theoretically advantageous for both defrosting and preheating. The proposed improvements are to adjust the tilt angle of the solar collectors and to reconnect the heat supply obtained from them. / The building in this survey was completed in October 2016 and is located at Kvarnvägen 31 in Gemla. The purpose of the study is to map the energy consumption and determine whether the installation of solar collectors is beneficial or not. The goal is to map the energy use in the building, report improvement measures and analyse the technical installations. The qualitative methods consisted of a study visit, site visits, a review of drawings and an ocular survey of the building with a thermal camera. In order to calculate and analyse the building's energy use, modelling of the building envelope components and technical installations was performed in VIP-Energy. The results of the energy survey show that the calculated energy use for the building is similar to the projected energy use, and the energy declaration places the building in energy class B. Many factors are of significant importance in optimizing solar collectors, such as inclination angle, orientation and installation type. Having solar collectors installed proved to be beneficial both in terms of energy and cost if they are connected as proposed. The HSB FTX system is theoretically advantageous for both preheating of supply air and defrosting in the building's ventilation system. The enhancement proposals are to adjust the inclination angle of the solar collectors and to reconnect the heat input obtained from the solar collectors.
109

On the efficient and sustainable utilisation of shallow geothermal energy by using borehole heat exchangers

Hein, Philipp Sebastian 16 January 2018 (has links) (PDF)
In the context of the energy transition, geothermal energy plays an important role in the heating and cooling supply of both residential and commercial buildings. At the same time, the increasingly intensive utilisation of shallow geothermal resources bears the risk of over-exploitation and thus poses a future challenge for ensuring the sustainability and safety of such systems. In particular, the well-established technology of borehole heat exchanger-coupled ground source heat pumps is applied for thermal exploitation of the shallow subsurface. Due to the complexity of the physical processes involved, numerical modelling proves to be a powerful tool to enhance process understanding as well as to aid the planning and design process. Simulations can also support the management of thermal subsurface resources, planning and decision-making on city and regional scales. In this work, the so-called dual-continuum approach was adopted and enhanced to develop a coupled numerical model that considers flow and heat transport processes in both the subsurface and the borehole heat exchangers as well as the heat pumps' performance characteristics, and that includes the relevant phenomena influencing the underlying processes. Besides the temperature fields, the efficiency and thus the consumption of electrical energy by the heat pump are computed, allowing quantification of operational costs and equivalent carbon dioxide emissions. The model is validated and applied in a number of numerical studies. First, a comprehensive sensitivity analysis of the efficiency and sustainability of such systems is performed. Second, a method for quantifying the technically extractable shallow geothermal energy is proposed; this procedure is demonstrated by means of a case study for the city of Cologne, Germany, and its implications are discussed. / Within the framework of the energy transition, geothermal energy takes on a special role in the thermal supply of buildings. The increasing, intensive use of shallow geothermal resources raises the risk of excessive thermal exploitation of the subsurface and thus poses a growing challenge for the sustainability and safety of such systems. For exploiting shallow geothermal energy, the established technology of borehole heat exchanger-coupled heat pumps is used in particular. Owing to the complex physical processes involved, numerical models prove to be a powerful tool for extending process understanding and for supporting planning and design. In addition, simulations can contribute to the management of thermal resources in the subsurface and to planning and policy decision-making at city and regional scales. In this work, based on the so-called dual-continuum approach and taking the influence of the heat pump into account, an extended coupled numerical model was developed to represent the flow and heat transport processes taking place in borehole heat exchangers and the subsurface. The model is able to take all relevant influencing factors into account. Besides the temperature fields in the subsurface and the borehole heat exchanger, the efficiency and thus the electricity consumption of the heat pump are simulated, so that both the operating costs and the equivalent CO2 emissions can be estimated. The model was validated and applied in a series of numerical studies. First, a comprehensive sensitivity analysis of the efficiency and sustainability of such systems was carried out. Furthermore, a procedure for quantifying the technically usable shallow geothermal potential is presented and demonstrated in a case study for the city of Cologne, followed by a discussion of the results.
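A small sketch of the operational bookkeeping this kind of coupled model enables, assuming a fixed seasonal COP; the demand, electricity price and emission factor are illustrative assumptions, not results from the thesis.

```python
# Estimate heat-pump electricity use, operating cost and equivalent CO2 from a heating
# demand and a (seasonal) COP.
def heat_pump_operation(heat_demand_kWh: float, cop: float,
                        elec_price_per_kWh: float, co2_kg_per_kWh: float) -> dict:
    electricity_kWh = heat_demand_kWh / cop              # compressor (and pump) electricity
    ground_heat_kWh = heat_demand_kWh - electricity_kWh  # balance extracted via the BHE
    return {
        "electricity_kWh": electricity_kWh,
        "ground_heat_kWh": ground_heat_kWh,
        "cost": electricity_kWh * elec_price_per_kWh,
        "co2_kg": electricity_kWh * co2_kg_per_kWh,
    }

# Example: 12,000 kWh/year heating demand, COP 4.0, 0.30 per kWh, 0.4 kg CO2 per kWh
print(heat_pump_operation(12_000, cop=4.0, elec_price_per_kWh=0.30, co2_kg_per_kWh=0.4))
```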
110

Borrhålslängder vid pallbrytning : Undersökning om önskade borrhålslängder kan erhållas vid produktionsborrning i dagbrott

Mitander, Eva, Hauri, Oskar January 2017 (has links)
In the Aitik open-pit mine in northern Lapland, copper ore is mined using bench mining. In short, this means that around 200 boreholes are drilled vertically into the rock and later filled with explosives; on detonation, a horizontal slice, a so-called bench, is freed from the surrounding rock. Aitik currently uses its drill plan to determine the lengths of the boreholes, but these lengths do not always correspond to the desired borehole lengths. The goal of the project was to propose improved methods and approaches for reaching the desired level in production drilling. During the project it was studied how much the drill plan deviates from the desired borehole lengths, and also to what extent the drill rigs can autonomously calculate correct hole lengths using their navigation systems. One drill rig in Aitik uses Trimble's navigation platform, while the other four use Leica. The analyses showed that Trimble's navigation system was very reliable for hole-length determination, provided that careful calibration had been performed. Leica's navigation system proved to be less reliable, as it fluctuated in height: for the same point in space, different height coordinates were shown at different times. The Leica-equipped drill rig that fluctuated the most had a variation width of 31.9 cm. The recommendations are: * If the drill plan is to continue to be used for hole-length determination, the "zeroing" of the drill rig should be performed without rotation. Zeroing defines the point registered as the drilling start point during drilling. Our tests showed that if zeroing is performed with rotation, the drill bit can sink 20 cm into the bench surface before drill start is registered, which can make the boreholes too long. * During a test period, the navigation systems of the rigs should autonomously calculate the borehole lengths. During this period, regular checks of the navigation systems should be made to ensure that the height coordinates stay within set limits. If the rigs' navigation systems are checked regularly, a large body of statistical data can be built up, which can be used for a long-term decision on whether the navigation systems should continue to be used to determine borehole lengths. / In the Aitik open-pit mine, situated in northern Lapland, copper ore is mined using bench mining. In short, around two hundred boreholes are drilled vertically into the rock and subsequently filled with explosives. At detonation, a horizontal slice called a bench is released from the surrounding rock. Today Aitik uses a drilling plan to decide the lengths of the boreholes; however, these lengths do not always correspond with the desired borehole lengths. The goal of the project was to find and suggest better methods and approaches to achieve the desired levels in production drilling. During the project, studies were made of how much the drilling plan differs from the desired borehole lengths. The studies also concerned the extent to which the drill rigs can autonomously determine the borehole lengths using their navigation systems. One drill rig in Aitik uses a Trimble platform for navigation, while the other four use Leica. The analysis shows that the Trimble navigation platform was very reliable in determining borehole lengths, provided that an accurate calibration was performed. The Leica navigation system turned out to be less reliable, since its height readings fluctuated: the same point in space would show different height coordinates at different points in time. The Leica-equipped drill rig with the most fluctuation had a variation width of 31.9 cm. The recommendations are: * That, if the drill plan continues to be used to determine borehole length, the "zero setting" of the drill rig shall be made without drill rotation. The zero setting is the registered starting point of the drilling operation. The tests showed that if the zero setting is done with rotation, the drill bit can sink 20 cm into the bench surface before drill start is registered; because of this, the boreholes can become too long. * That, during a test period, the navigation systems of the rigs shall autonomously calculate and decide the borehole length. During this period, regular checks of the navigation systems should be made to ascertain that the height coordinates remain within certain limits. Regular checks of the navigation systems can provide a large amount of statistical data, which can be used to make a long-term decision on whether the navigation systems should continue to determine the borehole lengths.
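The recommended regular check of the navigation system's height readings could be scripted along the lines below; the tolerance limit and the sample readings are assumptions, chosen only so that the spread matches the reported 31.9 cm variation width.

```python
# Compute the variation width (range) of repeated height readings at a fixed point
# and flag whether the spread stays within an accepted tolerance.
def variation_width(height_readings_m: list[float]) -> float:
    return max(height_readings_m) - min(height_readings_m)

def within_limit(height_readings_m: list[float], limit_m: float = 0.10) -> bool:
    """True if the spread of height readings stays inside the accepted tolerance."""
    return variation_width(height_readings_m) <= limit_m

readings = [312.45, 312.61, 312.33, 312.52, 312.649]   # hypothetical repeated readings of one point
print(round(variation_width(readings), 3), within_limit(readings))   # 0.319 False
```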
