  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
81

Graded organisation of fibronectin to tune cell behaviour

Grigoriou, Eleni January 2017 (has links)
Cells are in constant and dynamic interaction with the extracellular environment, receiving numerous inputs that regulate cell behaviour. Fibronectin (FN), an abundant protein of the extracellular matrix (ECM), contains multiple binding domains and binds to cell receptors, growth factors and other ECM proteins. FN undergoes conformational changes through cell-generated contractile forces, which consequently affect cell response. Tissue engineering aims to engineer biomaterials that recreate the in vivo ECM. In addition to biomaterials, stem cells have emerged as a promising source due to their inherent differentiation potential. In this work, the role of polyacrylates in controlling human mesenchymal stem cell (hMSC) behaviour was explored. In particular, a series of copolymers with specific ratios of ethyl acrylate (EA) and methyl acrylate (MA) were used. It is known that poly(ethyl acrylate) (PEA) triggers a network-like conformation of FN upon adsorption, whereas poly(methyl acrylate) (PMA) elicits a globular conformation. It was found that a different degree of FN organisation can be obtained depending on the EA/MA ratio, with the network becoming more connected as the EA ratio increases. This differential conformation was shown to affect the availability of critical binding sites. This system was further used to study hMSC response in terms of adhesion and osteogenic differentiation. All surfaces supported cell growth and focal adhesion formation; however, increased cell size and spreading were promoted on surfaces with higher EA concentration. Next, the potential of the surfaces, after sequential adsorption of FN and the growth factor BMP-2, to drive osteogenic commitment was explored. Enhanced expression of the osteogenic markers RUNX2 and OCN was found with higher concentrations of EA, whereas the opposite was observed for ALP expression. Another part of this work involved investigating cell migration on PEA and PMA. 
Higher cell speed was found on PEA, where FN adopts a more extended conformation. Moreover, the protein composition of focal adhesions was evaluated by proteomic analysis. The findings of this work give further insights into how surfaces with well-defined chemical properties can modulate FN conformation and how these changes affect cellular processes.
82

Feasibility of the motorized momentum exchange tether system : an investigation of system risk

Draper, Christopher Hayward January 2006 (has links)
This thesis examines the feasibility of a motorized momentum exchange tether (MMET) system being used to perform commercial space launches. The MMET system is an on-orbit launch concept that could be used to reduce the cost of access to space, thereby catalysing a broader range of space-enabled business concepts. The research presented in this thesis assumes that the cost of access to space for a reasonable launch system can be represented as the adverse financial risk of its operation. Under this assumption, the research concludes that an MMET-based system would be a feasible alternative to an equivalently capable conventional system if the risk associated with the system is less than that associated with the alternative. To illustrate the concepts and approaches presented within, this thesis presents an assessment of the proposed Lunar Staged MMET (LSM) mission, an assessment that indicates the MMET is a feasible alternative for completing such a mission under specific analytical and market conditions. The expected financial risk is presented in this thesis as the product of the mission cost and the probability of mission failure. The cost of each mission is calculated from the perspective of the end customer, and the long-term price of such services is computed using publicly available data and the assumption that the commercial space industry can be modelled as an oligopoly. Support for such a model is found in the literature and through this research, which compares the quarterly financial data published by the Boeing Company against the international commercial launch rate. The probability of system failure associated with an MMET-based unconventional launch system must account for a number of factors. For the first, conventional stage of the system, the probability of stage failure is estimated through an examination of observed failure rates relative to conventional engineering reliability estimates for conventional launch vehicles. 
Through this examination, a novel approach to calculating the rate at which the probability of failure for vehicles produced within a variant class changes as a function of time is presented, an approach that offers a valid technique for applying reliability growth across a series of vehicles that are best considered to be independent vehicles. The thesis goes on to present the results of research into various component aspects that are vital to the design and analysis of a tether-based system. First, the research explores the strength of tethers modelled as braided aramid ropes, which supports claims of the strain dependence of aramid fibre strength, a phenomenon that can yield significant strength benefits and should be accounted for in any operational architecture. Second, the thesis presents an empirical hypervelocity impact effects equation calibrated for use with tethers, which indicates that the currently accepted approach to oblique hypervelocity impacts may not be appropriate for tether analyses. Third, research into fractured impactor dispersion after a hypervelocity impact on tether targets is presented, which indicates that the commonly accepted one-impact-one-failure assumption employed for multi-line tether analyses may not be sufficient. TetherLife, an analytical program developed to calculate the expected lifetime of an MMET system given various sub-span parameters, employs the products of these research areas to calculate the mean time to failure for a range of tether sizes and orientations. 
After combining the probability of failure associated with the conventional launch vehicle component of the MMET-based unconventional launch system, the probability of failure associated with the MMET system itself, the probability of failure associated with handing a payload between systems, and the likely cost of deploying a suitable set of MMET systems, a comparison can be made between the financial risk associated with completing a specific mission using an MMET-based unconventional launch system versus completing the same mission using conventional means. For the LSM mission examined within the research, an MMET-based system would be a reasonable option if an average of 85 missions per year is required, contingent on specific analytical assumptions. While such a number of lunar supply missions is not currently required, the conclusion that the MMET system can be an alternative to a conventional system under various circumstances offers support for continuing current research on system design and analysis.
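The risk framework above (expected financial risk as mission cost times probability of mission failure, with failures combined across the conventional launch, the MMET stage, and the payload hand-off) can be sketched as follows. All numbers are purely illustrative assumptions, not figures from the thesis:

```python
def mission_risk(cost, p_failure):
    """Expected financial risk, as defined in the thesis:
    the product of mission cost and probability of mission failure."""
    return cost * p_failure

def combined_failure_probability(stage_probs):
    """Combine failure probabilities of independent stages
    (e.g. launch stage, MMET stage, payload hand-off)."""
    p_success = 1.0
    for p in stage_probs:
        p_success *= (1.0 - p)
    return 1.0 - p_success

# Made-up costs and stage failure probabilities:
conventional = mission_risk(cost=100e6, p_failure=0.05)
mmet = mission_risk(cost=60e6,
                    p_failure=combined_failure_probability([0.05, 0.02, 0.01]))
# The MMET-based system is feasible under this criterion if its risk is lower.
print(conventional, mmet)
```

The comparison reduces to a single scalar per architecture, which is what makes the "feasible if risk is lower" criterion tractable.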
83

Viscoelastic relaxation in polymers with special reference to behaviour at audio frequencies

Lindon, Peter January 1965 (has links)
An electromagnetic transducer has been developed to measure the complex dynamic shear modulus of viscoelastic liquids as a function of frequency in the range 20 c/s to 1.5 kc/s. The test liquid is subjected to an oscillatory shear strain in an annular gap, and the variation of loading on the moving boundary as a function of the height of liquid in the annulus is reflected as a change in transfer impedance at the transducer terminals. This change in electrical impedance may then be used to calculate the shear properties of the test liquid. The liquids investigated were four polydimethylsiloxane fluids of differing molecular weight. Measurements previously made on these fluids at higher frequencies have been extrapolated to low frequencies on the basis of a modified theory of Rouse, and it is shown that these extrapolations coincide well with the low-frequency experimental determinations. A theory has also been developed to attempt a correlation of the non-Newtonian behaviour of viscoelastic liquids under steady shear flow with the dynamic shear moduli. It appears that there is a functional relationship connecting the shear and normal stresses as a function of shear rate with the real and imaginary parts of the complex shear modulus as a function of angular frequency. In addition, the recoverable elastic shear strain in steady flow appears in the resulting equations and shows that the properties in oscillatory shear do not completely specify the behaviour in steady shear flow. Some comparison of the theory with experiment is given. Finally, some attention has been given to means of automatically calculating relaxation spectra from dynamic modulus data. Although various methods of performing this calculation have already been described, they usually involve laborious hand computation and are not amenable to direct programming for use on a computer. 
Two new methods are described, one of which need involve only a simple hand calculation after a certain matrix has been pre-calculated. This matrix does not depend on the data values and so needs to be calculated only once.
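The pre-calculated-matrix idea can be sketched as below. The kernel form, frequency grid and relaxation times here are illustrative assumptions, not the thesis's actual method; the point is only that the inversion matrix depends on the chosen grids, not on the measured data, so it is computed once and reused:

```python
# Discrete kernel for a storage-modulus-like quantity:
# G'(w) = sum_k H_k * (w*tau_k)^2 / (1 + (w*tau_k)^2)   (assumed form)
def kernel(omegas, taus):
    return [[(w * t) ** 2 / (1.0 + (w * t) ** 2) for t in taus] for w in omegas]

def invert2(M):
    """Inverse of a 2x2 matrix -- stands in for the pre-calculated
    matrix, which does not depend on the measured data values."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

omegas = [1.0, 10.0]   # measurement frequencies (rad/s), illustrative
taus = [0.05, 1.0]     # assumed relaxation times (s)
M_inv = invert2(kernel(omegas, taus))   # computed once, reused for any data
H = matvec(M_inv, [0.6, 1.4])           # modulus data -> spectrum weights
print(H)
```

With the matrix inverted in advance, each new data set requires only a matrix-vector product, i.e. the "simple hand calculation" the abstract mentions.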
84

Undrained shear strength of ultra-soft soils admixed with lime

Al-Alwan, Asad A. Khedheyer January 2019 (has links)
This thesis describes the results of a study on the undrained shear strength (Cu) of ultra-soft clay soils in admixtures of calcium hydroxide (slaked lime). The pozzolanic gains in strength over time, over periods as long as one year, were recorded. The undrained shear strengths were measured primarily using penetration tests: a Tinius Olsen desk-top compression machine was modified to conduct these constant rate-of-strain tests, using circular disc penetrometers. Measured bearing resistances were interpreted in terms of undrained shear strengths: data from the literature, as well as some finite element analyses, were employed to establish the necessary depth-dependent correlations. The strength testing programme was supplemented by triaxial compression and vane shear tests. The parametric study of the factors affecting the strength of lime-admixed clay slurries included soil type, water content, lime content, curing time, and curing temperature. The results show how the rate of strength gain is affected by soil mineralogy. The greatest strength gains can only occur if sufficient clay fractions are present to utilize any unbound additive and, conversely, sufficient additive is present. For clays, samples prepared at the same water content/liquid limit ratio (W = w/wLL) produced approximately the same undrained shear strength after one year of curing. Tests were also conducted on remoulded samples: as expected, these admixed soils have high sensitivity. However, remoulding is not achieved without the expenditure of considerable work. Moreover, the remoulded strengths remain some orders of magnitude higher than those of their untreated counterparts. Diffusion of additive from the admixture into the surrounding water was observed; this was manifest in softening of the near-surface material, which over a period of one year extended to depths of the order of 10 cm, depending on lime content. Curing temperature has a significant effect on the rate of strength development. 
Lower curing temperatures retard strength development while higher temperatures have the opposite effect. The Arrhenius model for the rates of chemical reactions describes this temperature-dependent phenomenon very satisfactorily. Finite element studies, including small-strain Lagrangian and coupled Eulerian-Lagrangian large-displacement formulations (implemented within ABAQUS), were conducted to investigate whether penetrometer data interpretation required consideration of the finite size of the test chamber. These numerical results tended to confirm the experimental finding that penetrometer disc diameters up to 30 mm were sufficiently small to be unaffected by constraints imposed by the test chambers. In addition, oedometer testing was carried out on both intact and remoulded samples. The former revealed the existence of reasonably well-defined "yield stresses", which were found to correlate well with the corresponding undrained shear strengths. The compression and swell indices were found to be largely dependent on soil type and correspondingly unaffected by lime content.
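The Arrhenius temperature dependence invoked above can be sketched as a ratio of reaction rates between two curing temperatures. The activation energy used here is an assumed illustrative value, not one fitted in the thesis:

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def rate_factor(T_celsius, T_ref_celsius, activation_energy):
    """Arrhenius ratio k(T)/k(T_ref) of reaction-rate constants:
    k = A*exp(-Ea/(R*T)), so the pre-exponential factor cancels."""
    T = T_celsius + 273.15
    T_ref = T_ref_celsius + 273.15
    return math.exp(-activation_energy / R * (1.0 / T - 1.0 / T_ref))

# Higher curing temperature -> faster strength development (> 1),
# lower curing temperature -> retarded development (< 1):
print(rate_factor(40, 20, activation_energy=50e3))
print(rate_factor(5, 20, activation_energy=50e3))
```

The same ratio is what lets curing times measured at one temperature be mapped onto an equivalent "maturity" at another.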
85

Evaluation of nanopore-based sequencing technology for gene marker based analysis of complex microbial communities : method development for accurate 16S rRNA gene amplicon sequencing

Calus, Szymon Tomasz January 2018 (has links)
Nucleic acid sequencing can provide a detailed overview of microbial communities in comparison with standard plate-culture methods. The expansion of high-throughput sequencing (HTS) technologies and the reduction in analysis costs have allowed for detailed exploration of various habitats with the use of amplicon, metagenomic, and metatranscriptomic approaches. However, due to the capital cost of HTS platforms and their requirements for batch analysis, genomics-based studies are still not used as a standard method for the comprehensive examination of environmental or clinical samples for microbial characterisation. This research project investigated the potential of a novel nanopore-based sequencing platform from Oxford Nanopore Technologies (ONT) for rapid and accurate analysis of various environmentally complex samples. ONT is an emerging company that developed the first-ever portable nanopore-based sequencing platform, called MinION™. The portability and miniaturised size of the device give an immense opportunity for de-centralised, in-field, and real-time analysis of environmental and clinical samples. Nonetheless, benchmarking of this new technology against the current gold-standard platform (i.e., Illumina sequencers) is necessary to evaluate nanopore data and understand its benefits and limitations. The focus of this study is the evaluation of nanopore sequencing data: read quality, sequencing errors, and alignment quality, but also bacterial community structure. For this reason, mock bacterial community samples were generated, sequenced and analysed with the use of multiple bioinformatics approaches. Furthermore, this study developed sophisticated library preparation and data analysis methods to enable high-accuracy analysis of amplicon libraries from complex microbial communities for sequencing on the nanopore platform. 
In addition, the best-performing library preparation and data analysis methods were used for the analysis of environmental samples and compared to high-quality Illumina metagenomic data. This work opens a new possibility for accurate, in-field amplicon analysis of complex samples with the use of MinION™ and for the development of autonomous biosensing technology for culture-free detection of pathogenic and non-pathogenic microorganisms in water, soil, food, drinks or blood.
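As a minimal illustration of the read-quality evaluation mentioned above, the standard Phred convention (Q = −10·log₁₀ p) converts per-base quality scores into error probabilities; the scores below are hypothetical, not actual MinION data:

```python
def phred_to_error_prob(q):
    """Convert a Phred quality score Q to a base-call error probability:
    p = 10 ** (-Q / 10), so Q20 means a 1-in-100 error chance."""
    return 10 ** (-q / 10)

def mean_read_accuracy(quality_scores):
    """Average per-base accuracy for one read, from its quality string."""
    errors = [phred_to_error_prob(q) for q in quality_scores]
    return 1.0 - sum(errors) / len(errors)

# Q10 -> 10% error, Q20 -> 1% error, Q30 -> 0.1% error per base
print(mean_read_accuracy([10, 20, 30]))
```

Averaging the error probabilities (rather than the Q scores themselves) is the usual convention, since Phred scores are logarithmic.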
86

Viscosity measurements at pressures up to 14,000 bar using an automatic falling cylinder viscometer

Irving, John Bruce January 1977 (has links)
The thesis describes a new method for measuring the viscosity of liquids in a pressure vessel capable of reaching 14 000 bar, and results are presented for six liquids at 30°C, up to viscosities of 3000 P. The technique is based on the well-tried principle of a cylindrical sinker falling in a viscometer tube. It departs from earlier systems in that the sinker is retrieved electromagnetically rather than by rotating the whole pressure vessel, and the sinker is held by a semi-permanent magnet before a fall-time measurement is made. The sinkers do not have guiding pins, but rely on self-centering forces to ensure concentric fall. Another novel aspect is that a sinker with a central hole, to produce faster fall times, has been introduced for the first time. An analysis for such a sinker is presented, and when the diameter of the hole is mathematically reduced to zero, the equation of motion for the solid sinker is obtained. The solution for the solid cylinder is compared with earlier approximate analyses. The whole cycle of operation - retrieval, holding, releasing, sinker detection, and recording - is remotely controlled and entirely automated. With unguided falling weights it is essential that the viscometer tube is aligned vertically. The effects of non-vertical alignment are assessed both experimentally and theoretically. An original analysis is presented to explain the rather surprising finding that when a viscometer tube is inclined from the vertical, the sinker falls much more quickly. The agreement between experiment and theory is to within one per cent. From the analysis of sinker motion, appropriate allowances for the change in sinker and viscometer tube dimensions under pressure are calculated; these are substantially linear with pressure. The viscometer was calibrated at atmospheric pressure with a variety of liquids whose viscosities were ascertained with calibrated suspended-level viscometers. 
Excellent linearity over three decades of viscosity was found for both sinkers. A careful analysis of errors shows that the absolute accuracy of measurement is to within ±1.8 per cent. The fall time of the sinker is also a function of the buoyancy of the test liquid. Therefore a knowledge of the liquid density is required, both at atmospheric pressure and at elevated pressures. The linear differential transformer method for density measurement formed the basis of a new apparatus designed to fit into the high-pressure vessel. Up to pressures of 5 kbar, measurements are estimated to be within ±0.14 per cent, and above this pressure the uncertainty could be as high as 0.25 per cent. The last chapter deals with empirical and semi-theoretical viscosity-pressure equations. Two significant contributions are offered. The first is a new interpretation of the free volume equation in which physically realistic values of the limiting specific volume, v₀, are derived by applying viscosity and density data to the equation isobarically, not isothermally as most have done in the past. This led to a further simplification of the free volume equation to a two-constant equation. The second contribution is a purely empirical equation which describes the variation of viscosity as a function of pressure: ln(η/η₀)_t = A(e^(BP) − e^(−KP)), where η₀ is the viscosity at atmospheric pressure, and A, B and K are constants. This 'double-exponential' equation is shown to describe data to within experimental error for viscosities which vary by as much as four decades with pressure. It also describes the different curvatures which the logarithm of viscosity exhibits when plotted as a function of pressure: concave towards the pressure axis, convex, straight line, or concave and then convex. The many other equations in existence cannot describe this variety of behaviour.
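The 'double-exponential' equation can be evaluated directly; the constants below are made-up placeholders rather than the fitted values reported in the thesis:

```python
import math

def log_viscosity_ratio(P, A, B, K):
    """Empirical 'double-exponential' pressure dependence of viscosity:
    ln(eta/eta0) = A * (exp(B*P) - exp(-K*P)), at constant temperature."""
    return A * (math.exp(B * P) - math.exp(-K * P))

def viscosity(P, eta0, A, B, K):
    """Viscosity at pressure P given the atmospheric-pressure value eta0."""
    return eta0 * math.exp(log_viscosity_ratio(P, A, B, K))

# Illustrative constants only (not fitted values from the thesis):
print(viscosity(0.0, eta0=1.0, A=0.5, B=0.2, K=1.0))  # equals eta0 at P = 0
```

Note that the form guarantees ln(η/η₀) = 0 at P = 0, and the two exponentials with opposite signs are what allow the concave, convex, and mixed curvatures described above.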
87

Computational Fluid Dynamics (CFD) based investigations on the flow of capsules in vertical hydraulic pipelines

Algadi, Abdualmagid January 2017 (has links)
The rapid depletion of energy sources has remarkably impacted the transport sector, where the costs of freight transportation are rising dramatically every year. Significant endeavours have been made to develop innovative means of transport that can be adopted for economically and environmentally friendly operating systems. Transport pipelines are considered one such alternative mode that can be used to transfer goods. The flow behaviour of a solid-liquid mixture in a hydraulic capsule pipeline is quite complicated, due to its dependence on a large number of geometrical and dynamic parameters, and it remains a subject of active research. In addition, the published literature is extremely limited in terms of identifying the impact of capsule shape on the flow characteristics of pipelines, even though the shape of these capsules has a significant effect on the hydrodynamic behaviour within such pipelines. This thesis presents a computational investigation, employing an advanced Computational Fluid Dynamics (CFD) based tool, to simulate the flow of capsules of varied shapes, quantified in the form of a novel shape factor, in a vertical hydraulic capsule pipeline. A 3-D dynamic meshing technique with a six-degrees-of-freedom approach is applied for numerical simulation of unsteady flow fields in vertical capsule pipelines. Variations in flow-related parameters within the pipeline are discussed in detail for the geometrical parameters associated with the capsules and the flow conditions within Hydraulic Capsule Pipelines (HCPs). Detailed quantitative and qualitative analyses have been conducted in the current research. The qualitative analysis of the flow field comprises descriptions of the pressure and velocity distributions within the pipeline. The investigations have been conducted on the flow of spherical, cylindrical and rectangular capsules, each shape separately, for offshore applications. 
The flow behaviour inside HCPs depends on the flow conditions and geometric parameters. The development of novel predictive models for pressure drop and capsule velocity is one of the goals achieved in this research. Moreover, the flow of a variety of differently shaped capsules in combination has also been investigated, based on the impact of the order of the capsule shapes within the vertical pipeline. It has been found that the motion of mixed capsules along the pipeline shows a significant variation compared to the same basic capsule shapes being transported alone across the pipelines. Capsule pipeline designers need accurate data regarding the pressure drop, holdup, the shape of the capsules, etc. at early design phases. A methodology of optimisation is developed based on the least-cost principle for vertical HCPs. The inputs to the predictive models are the shape factor of the capsule and the solid throughput demanded of the system, while the outputs are the pumping power required for the capsule transportation process and the optimal diameter of the HCP. In the present study, a complete visualisation of capsule flow and the design of vertical hydraulic capsule pipelines have been reported. Sophisticated computational tools have made it possible to analyse and map the flow structure in an HCP, resulting in a deeper understanding of the flow behaviour and trajectory of the capsules in vertical pipes.
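A least-cost optimisation of the kind described (inputs: capsule shape factor and solid throughput; outputs: pumping requirement and optimal pipe diameter) might be sketched as below. The cost model and every coefficient are invented for illustration only and are not the thesis's predictive models:

```python
def total_cost(d, throughput, shape_factor,
               capital_coeff=1000.0, pumping_coeff=0.005):
    """Illustrative least-cost trade-off: capital cost grows with pipe
    diameter while pumping cost falls steeply with it. All coefficients
    and exponents are made up for the sake of the sketch."""
    capital = capital_coeff * d ** 2
    pumping = pumping_coeff * shape_factor * throughput / d ** 5
    return capital + pumping

def optimal_diameter(throughput, shape_factor, candidates):
    """Brute-force search for the diameter minimising total cost."""
    return min(candidates, key=lambda d: total_cost(d, throughput, shape_factor))

diameters = [0.1 + 0.01 * i for i in range(90)]  # candidate diameters, m
best = optimal_diameter(throughput=100.0, shape_factor=1.0, candidates=diameters)
print(best)
```

The interior minimum arises because the two cost terms pull in opposite directions; real design tools would replace both terms with the fitted predictive models.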
88

Intrusion detection in SCADA systems using machine learning techniques

Maglaras, Leandros January 2018 (has links)
Modern Supervisory Control and Data Acquisition (SCADA) systems are essential for monitoring and managing electric power generation, transmission and distribution. In the age of the Internet of Things, SCADA has evolved into big, complex and distributed systems that are prone to conventional as well as new threats. To detect intruders in a timely and efficient manner, a real-time detection mechanism capable of dealing with a range of forms of attack is highly salient. Such a mechanism has to be distributed, low-cost, precise, reliable and secure, with a low communication overhead, so as not to interfere with the industrial system's operation. In this commentary, two distributed Intrusion Detection Systems (IDSs) able to detect attacks that occur in a SCADA system are proposed, both developed and evaluated for the purposes of the CockpitCI project. The CockpitCI project proposes an architecture based on a real-time Perimeter Intrusion Detection System (PIDS), which provides the core cyber-analysis and detection capabilities, being responsible for continuously assessing and protecting the electronic security perimeter of each critical infrastructure (CI). Part of the PIDS that was developed for the purposes of the CockpitCI project is the one-class support vector machine (OCSVM) module. During the project, two novel OCSVM modules were developed and tested using datasets from a small-scale testbed created to mimic a SCADA system operating both in normal conditions and under the influence of cyberattacks. The first method, namely K-OCSVM, can distinguish real from false alarms using the OCSVM method with default values for the parameters ν and σ, combined with a recursive K-means clustering method. K-OCSVM is very different from similar methods, which require pre-selection of parameters with the use of cross-validation, or which ensemble the outcomes of one-class classifiers. 
Building on K-OCSVM, and to cope with the high requirements imposed by the CockpitCI project in terms of both accuracy and time overhead, a second method, namely IT-OCSVM, is presented. The IT-OCSVM method is capable of performing outlier detection with high accuracy and low overhead within a temporal window, adequate for the nature of SCADA systems. The two presented methods perform well under several attack scenarios. Having to balance high accuracy, a low false alarm rate, real-time communication requirements and low overhead, under complex and usually persistent attack situations, a combination of several techniques is needed. Despite the range of intrusion detection activities, it has been proven that half of these incidents have human error at their core. Increased empirical and theoretical research into the human aspects of cyber security, based on the volume of human-error-related incidents, can enhance the cyber-security capabilities of modern systems. To strengthen the security of SCADA systems further, another solution is to deliver defence in depth by layering security controls so as to reduce the risk to the assets being protected.
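The alarm-filtering idea behind K-OCSVM (cluster the OCSVM anomaly scores with K-means and keep the strongly anomalous cluster) can be sketched in a simplified one-dimensional, non-recursive form. The scores are made up, and this is a stand-in for, not a reproduction of, the thesis's implementation:

```python
def kmeans_1d(values, iters=20):
    """Two-cluster 1-D K-means on anomaly scores -- a simplified stand-in
    for the recursive K-means step that separates real from false alarms."""
    lo, hi = min(values), max(values)
    for _ in range(iters):
        groups = ([], [])
        for v in values:
            groups[abs(v - hi) < abs(v - lo)].append(v)  # True -> hi cluster
        lo = sum(groups[0]) / len(groups[0]) if groups[0] else lo
        hi = sum(groups[1]) / len(groups[1]) if groups[1] else hi
    return lo, hi

# Hypothetical OCSVM decision-function scores for flagged packets;
# more negative = further outside the learned "normal" region.
scores = [-0.05, -0.04, -0.06, -1.2, -1.1, -1.3]
lo_centre, hi_centre = kmeans_1d(scores)
real_alarms = [s for s in scores
               if abs(s - min(lo_centre, hi_centre))
               < abs(s - max(lo_centre, hi_centre))]
print(real_alarms)  # the strongly negative cluster
```

In the full method this clustering is applied recursively and fed by an actual OCSVM (e.g. scikit-learn's `OneClassSVM`); here the scores are hard-coded for self-containment.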
89

Real-time link quality estimation and holistic transmission power control for wireless sensor networks

Hughes, Jack Bryan January 2018 (has links)
Wireless sensor networks (WSNs) are becoming widely adopted across multiple industries to implement sensor and non-critical control applications. These networks of smart sensors and actuators require energy-efficient and reliable operation to meet application requirements. Regulatory body restrictions, hardware resource constraints and an increasingly crowded network space make realising these requirements a significant challenge. Transmission power control (TPC) protocols are poised for widespread adoption in WSNs to address energy constraints and prolong the lifetime of networked devices. The complex and dynamic nature of the transmission medium, the processing and memory constraints of the hardware, and the low channel throughput make identifying the optimum transmission power a significant challenge. TPC protocols for WSNs are not well developed, and previously published works suffer from a number of common deficiencies, such as poor tuning agility, being impractical to implement on resource-constrained hardware, and not accounting for the energy consumed by packet retransmissions. This has resulted in several WSN standards featuring support for TPC but giving no formal definition for its implementation. Addressing the deficiencies associated with current works is required to increase the adoption of TPC protocols in WSNs. In this thesis, a novel holistic TPC protocol, with the primary objective of increasing the energy efficiency of communication activities in WSNs, is proposed, implemented and evaluated. Firstly, the opportunities for TPC protocols in WSN applications were evaluated by developing a mathematical model that compares transmission power against communication reliability and energy consumption. 
Applying this model to state-of-the-art (SoA) radio hardware and parameter values from current WSN standards, the maximum energy savings were quantified at up to 80% for links that belong to the connected region and up to 66% for links that belong to the transitional and disconnected regions. Applying the results from this study, the previous assumption that protocols and mechanisms such as TPC cannot achieve significant energy savings at short communication distances is contested: the study showed that the greatest energy savings are achieved at short communication distances and under ideal channel conditions. An empirical characterisation of wireless link quality in typical WSN environments was conducted to identify and quantify the spatial and temporal factors which affect radio and link dynamics. The study found that wireless link quality exhibits complex, unique and dynamic tendencies which cannot be captured by simplistic theoretical models. Link quality must therefore be estimated online, in real time, using resources internal to the network. An empirical characterisation of raw link quality metrics for evaluating the channel quality, packet delivery and channel stability properties of a communication link was conducted. Using the recommendations from this study, a novel holistic TPC protocol (HTPC), which operates on a per-packet basis and features a dynamic algorithm, is proposed. The optimal transmission power is estimated by combining channel quality and packet delivery properties to provide a real-time estimate of the minimum channel gain, and by using the channel stability properties to implement an adaptive fade margin. Practical evaluations show that HTPC is adaptive to link quality changes and outperforms current TPC protocols by achieving higher energy efficiency without detrimentally affecting communication reliability. 
When subjected to several common temporal variations, links implemented with HTPC consumed 38% less energy than the current practice of using a fixed maximum transmission power, and between 18% and 39% less than current SoA TPC protocols. Through offline computations, HTPC was found to closely match the optimal link performance, with links implemented with HTPC consuming only 7.8% more energy than when the optimal transmission power is used. On top of this, real-world implementations of HTPC show that it is practical to implement on resource-constrained hardware, as a result of using simple metric evaluation techniques and requiring a minimal number of samples. Comparing the performance and characteristics of HTPC against previous works, HTPC addresses the common deficiencies associated with current solutions and therefore presents an incremental improvement on SoA TPC protocols.
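The energy trade-off that motivates accounting for retransmissions (higher transmission power costs more per attempt but raises the packet reception ratio, so fewer attempts are needed) can be sketched with a simple expected-energy model. The power levels, packet reception ratios and packet air time below are assumed illustrative values, not measurements from the thesis:

```python
def expected_energy_per_packet(tx_power_mw, prr, packet_time_s=0.004):
    """Energy per *delivered* packet, counting retransmissions: each
    attempt costs tx_power * air time, and with independent losses an
    average of 1/PRR attempts are needed for one delivery."""
    attempt_energy = tx_power_mw * 1e-3 * packet_time_s  # joules per attempt
    return attempt_energy / prr

# Hypothetical (power in mW, packet reception ratio) operating points:
levels = [(1.0, 0.20), (4.0, 0.95), (10.0, 0.99)]
best = min(levels, key=lambda p: expected_energy_per_packet(*p))
print(best)  # -> (4.0, 0.95): the lowest power is NOT the cheapest here
```

This is exactly why TPC protocols that ignore retransmission energy can pick a transmission power that looks frugal per attempt but is expensive per delivered packet.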
90

Investigations into the perception of vertical interchannel decorrelation in 3D surround sound reproduction

Gribben, Christopher January 2018 (has links)
The use of three-dimensional (3D) surround sound systems has seen a rapid increase over recent years. In two-dimensional (2D) loudspeaker formats (i.e. two-channel stereophony (stereo) and 5.1 Surround), horizontal interchannel decorrelation is a well-established technique for controlling the horizontal spread of a phantom image. Use of interchannel decorrelation can also be found within established two-to-five channel upmixing methods (stereo to 5.1). More recently, proprietary algorithms have been developed that perform 2D-to-3D upmixing, which presumably make use of interchannel decorrelation as well; however, it is not currently known how interchannel decorrelation is perceived in the vertical domain. From this, it is considered that formal investigations into the perception of vertical interchannel decorrelation are necessary. Findings from such experiments may contribute to the improved control of a sound source within 3D surround systems (i.e. the vertical spread), in addition to aiding the optimisation of 2D-to-3D upmixing algorithms. The current thesis presents a series of experiments that systematically assess vertical interchannel decorrelation under various conditions. Firstly, a comparison is made between horizontal and vertical interchannel decorrelation, where it is found that vertical decorrelation is weaker than horizontal decorrelation. However, it is also seen that vertical decorrelation can generate a significant increase of vertical image spread (VIS) for some conditions. Following this, vertical decorrelation is assessed for octave-band pink noise stimuli at various azimuth angles to the listener. The results demonstrate that vertical decorrelation is dependent on both frequency and presentation angle - a general relationship between the interchannel cross-correlation (ICC) and VIS is observed for the 500 Hz octave-band and above, and is strongest for the 8 kHz octave-band. 
Objective analysis of these stimuli signals determined that spectral changes at higher frequencies appear to be associated with VIS perception – at 0° azimuth, the 8 and 16 kHz octave-bands demonstrate potential spectral cues, at ±30°, similar cues are seen in the 4, 8 and 16 kHz bands, and from ±110°, cues are featured in the 2, 4, 8 and 16 kHz bands. In the case of the 8 kHz octave-band, it seems that vertical decorrelation causes a ‘filling in’ of vertical localisation notch cues, potentially resulting in ambiguous perception of vertical extent. In contrast, the objective analysis suggests that VIS perception of the 500 Hz and 1 kHz bands may have been related to early reflections in the listening room. From the experiments above, it is demonstrated that the perception of VIS from vertical interchannel decorrelation is frequency-dependent, with high frequencies playing a particularly important role. A following experiment explores the vertical decorrelation of high frequencies only, where it is seen that decorrelation of the 500 Hz octave-band and above produces a similar perception of VIS to broadband decorrelation, whilst improving tonal quality. The results also indicate that decorrelation of the 8 kHz octave-band and above alone can significantly increase VIS, provided the source signal has sufficient high frequency energy. The final experimental chapter of the present thesis aims to provide a controlled assessment of 2D-to-3D upmixing, taking into account the findings of the previous experiments. In general, 2D-to-3D upmixing by vertical interchannel decorrelation had little impact on listener envelopment (LEV), when compared against a level-matched 2D 5.1 reference. Furthermore, amplitude-based decorrelation appeared to be marginally more effective, and ‘high-pass decorrelation’ resulted in slightly better tonal quality for sources that featured greater low frequency energy.
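The interchannel cross-correlation (ICC) measure central to these experiments can be sketched as a zero-lag normalised correlation between two loudspeaker channel signals; the sample values are arbitrary:

```python
import math

def interchannel_cross_correlation(x, y):
    """Normalised interchannel cross-correlation (ICC) at zero lag:
    +1 = identical channels (fully correlated, narrow image),
    -1 = polarity-inverted copy, near 0 = decorrelated channels."""
    num = sum(a * b for a, b in zip(x, y))
    den = math.sqrt(sum(a * a for a in x) * sum(b * b for b in y))
    return num / den

x = [0.2, -0.5, 0.9, -0.1]
print(interchannel_cross_correlation(x, x))                # ~1.0
print(interchannel_cross_correlation(x, [-a for a in x]))  # ~-1.0
```

Lowering the ICC between vertically arrayed channels is precisely what the decorrelation algorithms under test manipulate to widen the vertical image spread; a full implementation would also search over lags rather than evaluating zero lag only.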
