191

Viscoelastic relaxation in polymers with special reference to behaviour at audio frequencies

Lindon, Peter January 1965 (has links)
An electromagnetic transducer has been developed to measure the complex dynamic shear modulus of viscoelastic liquids as a function of frequency in the range 20 c/s to 1.5 kc/s. The test liquid is subjected to an oscillatory shear strain in an annular gap, and the variation of loading on the moving boundary as a function of the height of liquid in the annulus is reflected as a change in transfer impedance at the transducer terminals. This change in electrical impedance may then be used to calculate the shear properties of the test liquid. The liquids investigated were four polydimethylsiloxane fluids of differing molecular weight. Measurements previously made on these fluids at higher frequencies have been extrapolated to low frequencies on the basis of a modified theory of Rouse, and it is shown that these extrapolations coincide well with the low-frequency experimental determinations. A theory has also been developed to attempt a correlation between the non-Newtonian behaviour of viscoelastic liquids under steady shear flow and the dynamic shear moduli. It appears that there is a functional relationship connecting the shear and normal stresses as a function of shear rate with the real and imaginary parts of the complex shear modulus as a function of angular frequency. In addition, the recoverable elastic shear strain in steady flow appears in the resulting equations and shows that the properties in oscillatory shear do not completely specify the behaviour in steady shear flow. Some comparison of the theory with experiment is given. Finally, some attention has been given to means of automatically calculating relaxation spectra from dynamic modulus data. Although various methods of performing this calculation have already been described, they usually involve laborious hand computation and are not amenable to direct programming for use on a computer. Two new methods are described, one of which need involve only a simple hand calculation after a certain matrix has been pre-calculated. This matrix does not depend on the data values and so needs to be calculated only once.
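As a hedged illustration of the pre-computed-matrix idea above (the relaxation-time grid, frequency range and pseudo-inverse below are assumptions for the sketch, not the thesis's actual procedure), a discrete relaxation spectrum can be obtained from dynamic modulus data by a single matrix product, because the kernel matrix depends only on the chosen frequencies and relaxation times, not on the measured data:

```python
import numpy as np

def relaxation_kernel(omega, tau):
    """Kernel for a discrete Maxwell spectrum: rows are measurement
    frequencies (G' block stacked on G'' block), columns are relaxation times."""
    wt = omega[:, None] * tau[None, :]
    storage = wt**2 / (1.0 + wt**2)   # contribution of each mode to G'(omega)
    loss = wt / (1.0 + wt**2)         # contribution of each mode to G''(omega)
    return np.vstack([storage, loss])

# Assumed grids: measurement frequencies (20 c/s to 1.5 kc/s) and trial relaxation times
omega = 2 * np.pi * np.logspace(np.log10(20), np.log10(1500), 30)   # rad/s
tau = np.logspace(-4, 0, 12)                                         # s

K = relaxation_kernel(omega, tau)   # depends only on the grids, not on the data
K_pinv = np.linalg.pinv(K)          # the data-independent, pre-calculated matrix

# For any measured data set, the spectrum then follows from one matrix product
G_storage = np.random.rand(30)      # placeholder measurements of G'
G_loss = np.random.rand(30)         # placeholder measurements of G''
g = K_pinv @ np.concatenate([G_storage, G_loss])   # discrete spectrum strengths
```

Because the pseudo-inverse does not depend on the data, it needs to be computed only once and can then be reused for every new set of measurements.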
192

Undrained shear strength of ultra-soft soils admixed with lime

Al-Alwan, Asad A. Khedheyer January 2019 (has links)
This thesis describes the results of a study on the undrained shear strength (Cu) of ultra-soft clay soils in admixtures of calcium hydroxide (slaked lime). The pozzolanic gains in strength over periods as long as one year were recorded. The undrained shear strengths were measured primarily using penetration tests: a Tinius Olsen desk-top compression machine was modified to conduct these constant-rate-of-strain tests, using circular disc penetrometers. Measured bearing resistances were interpreted in terms of undrained shear strengths: data from the literature, as well as some finite element analyses, were employed to establish the necessary depth-dependent correlations. The strength testing programme was supplemented by triaxial compression and vane shear tests. The parametric study of the factors affecting the strength of lime-admixed clay slurries included soil type, water content, lime content, curing time, and curing temperature. The results show how the rate of strength gain is affected by soil mineralogy. The greatest strength gains can only occur if a sufficient clay fraction is present to utilize any unbound additive and, conversely, sufficient additive is present. For clays, samples prepared at the same water content to liquid limit ratio (W = w/wLL) produced approximately the same undrained shear strength after one year of curing. Tests were also conducted on remoulded samples: as expected, these admixed soils have high sensitivity. However, remoulding is not achieved without the expenditure of considerable work. Moreover, the remoulded strengths remain some orders of magnitude higher than their untreated counterparts. Diffusion of additive from the admixture into surrounding water was observed; this was manifest in softening of the near-surface material and over a period of one year extended to depths of the order of 10 cm, depending on lime content. Curing temperature has a significant effect on the rate of strength development: lower curing temperatures retard strength development while higher temperatures have the opposite effect. The Arrhenius model for the rates of chemical reactions describes this temperature-dependent phenomenon very satisfactorily. Finite element studies, including small-strain Lagrangian and coupled Eulerian-Lagrangian large-displacement formulations (incorporated within ABAQUS), were conducted to investigate whether penetrometer data interpretation required consideration of the finite size of the test chamber. These numerical results tended to confirm the experimental finding that penetrometer disc diameters up to 30 mm were sufficiently small to be unaffected by constraints imposed by the test chambers. In addition, oedometer testing was carried out on both intact and remoulded samples. The former revealed the existence of reasonably well-defined "yield stresses", which were found to correlate well with the corresponding undrained shear strengths. The compression and swell indices were found to be largely dependent on soil type and correspondingly unaffected by lime content.
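A minimal sketch of how an Arrhenius rate model can describe the temperature dependence of strength development; the activation energy, reference temperature and the equivalent-age idea below are illustrative assumptions, not values or methods taken from the thesis:

```python
import numpy as np

R = 8.314        # J/(mol K), universal gas constant
E_A = 45_000.0   # J/mol, assumed apparent activation energy (illustrative only)
T_REF = 293.15   # K, assumed reference curing temperature (20 degrees C)

def rate_factor(T_kelvin):
    """Arrhenius ratio of the strength-gain rate at T to the rate at T_REF."""
    return np.exp(-E_A / R * (1.0 / T_kelvin - 1.0 / T_REF))

def equivalent_age(days_at_T, T_celsius):
    """Curing time at temperature T expressed as time at the reference temperature."""
    return days_at_T * rate_factor(T_celsius + 273.15)

# Lower temperatures retard, higher temperatures accelerate strength development
print(rate_factor(np.array([278.15, 293.15, 313.15])))
print(equivalent_age(28, 40.0))   # 28 days at 40 C, expressed in 20 C days
```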
193

Evaluation of nanopore-based sequencing technology for gene marker based analysis of complex microbial communities : method development for accurate 16S rRNA gene amplicon sequencing

Calus, Szymon Tomasz January 2018 (has links)
Nucleic acid sequencing can provide a detailed overview of microbial communities in comparison with standard plate-culture methods. The expansion of high-throughput sequencing (HTS) technologies and the reduction in analysis costs have allowed for detailed exploration of various habitats with the use of amplicon, metagenomics, and metatranscriptomics approaches. However, due to the capital cost of HTS platforms and the requirement for batch analysis, genomics-based studies are still not used as a standard method for the comprehensive examination of environmental or clinical samples for microbial characterization. This research project investigated the potential of a novel nanopore-based sequencing platform from Oxford Nanopore Technologies (ONT) for rapid and accurate analysis of various environmentally complex samples. ONT is an emerging company that developed the first-ever portable nanopore-based sequencing platform, called MinION™. The portability and miniaturised size of the device give an immense opportunity for de-centralised, in-field, and real-time analysis of environmental and clinical samples. Nonetheless, benchmarking of this new technology against the current gold-standard platform (i.e., Illumina sequencers) is necessary to evaluate nanopore data and understand its benefits and limitations. The focus of this study is on the evaluation of nanopore sequencing data: read quality, sequencing errors, and alignment quality, but also bacterial community structure. For this reason, mock bacterial community samples were generated, sequenced and analysed with the use of multiple bioinformatics approaches. Furthermore, this study developed sophisticated library preparation and data analysis methods to enable high-accuracy analysis of amplicon libraries from complex microbial communities for sequencing on the nanopore platform. In addition, the best-performing library preparation and data analysis methods were used for the analysis of environmental samples and compared to high-quality Illumina metagenomics data. This work opens a new possibility for accurate, in-field amplicon analysis of complex samples with the use of MinION™ and for the development of autonomous biosensing technology for culture-free detection of pathogenic and non-pathogenic microorganisms in water, soil, food, drinks or blood.
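As a hedged example of one basic read-quality measure mentioned above (a generic Phred-based calculation, not the specific evaluation pipeline developed in the thesis), the expected per-read error rate can be computed directly from FASTQ quality strings:

```python
import gzip
from itertools import islice

def read_error_rate(quality_string, phred_offset=33):
    """Mean expected base-call error probability for one read, from its Phred qualities."""
    probs = [10 ** (-(ord(c) - phred_offset) / 10.0) for c in quality_string]
    return sum(probs) / len(probs)

def mean_error_rate(fastq_path, max_reads=1000):
    """Average expected error rate over the first max_reads reads of a FASTQ file."""
    opener = gzip.open if fastq_path.endswith(".gz") else open
    rates = []
    with opener(fastq_path, "rt") as handle:
        for record in iter(lambda: list(islice(handle, 4)), []):
            if len(record) < 4:
                break
            rates.append(read_error_rate(record[3].strip()))
            if len(rates) >= max_reads:
                break
    return sum(rates) / len(rates)
```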
194

Viscosity measurements at pressures up to 14,000 bar using an automatic falling cylinder viscometer

Irving, John Bruce January 1977 (has links)
The thesis describes a new method for measuring the viscosity of liquids in a pressure vessel capable of reaching 14 000 bar, and results are presented for six liquids at 30°C, up to viscosities of 3000 P. The technique is based on the well-tried principle of a cylindrical sinker falling in a viscometer tube. It departs from earlier systems in that the sinker is retrieved electromagnetically rather than by rotating the whole pressure vessel, and the sinker is held by a semi-permanent magnet before a fall-time measurement is made. The sinkers do not have guiding pins, but rely on self-centering forces to ensure concentric fall. Another novel aspect is that a sinker with a central hole to produce faster fall times has been introduced for the first time. An analysis for such a sinker is presented, and when the diameter of the hole is mathematically reduced to zero, the equation of motion for the solid sinker is obtained. The solution for the solid cylinder is compared with earlier approximate analyses. The whole cycle of operation - retrieval, holding, releasing, sinker detection, and recording - is remotely controlled and entirely automated. With unguided falling weights it is essential that the viscometer tube is aligned vertically. The effects of non-vertical alignment are assessed both experimentally and theoretically. An original analysis is presented to explain the rather surprising finding that when a viscometer tube is inclined from the vertical, the sinker falls much more quickly. The agreement between experiment and theory is to within one per cent. From the analysis of sinker motion, appropriate allowances for the change in sinker and viscometer tube dimensions under pressure are calculated; these are substantially linear with pressure. The viscometer was calibrated at atmospheric pressure with a variety of liquids whose viscosities were ascertained with calibrated suspended-level viscometers. Excellent linearity over three decades of viscosity was found for both sinkers. A careful analysis of errors shows that the absolute accuracy of measurement is to within ±1.8 per cent. The fall time of the sinker is also a function of the buoyancy of the test liquid. Therefore a knowledge of the liquid density is required, both at atmospheric pressure and at elevated pressures. The linear differential transformer method for density measurement formed the basis of a new apparatus designed to fit into the high-pressure vessel. Up to pressures of 5 kbar measurements are estimated to be within ±0.14 per cent, and above this pressure the uncertainty could be as high as 0.25 per cent. The last chapter deals with empirical and semi-theoretical viscosity-pressure equations. Two significant contributions are offered. The first is a new interpretation of the free volume equation, in which physically realistic values of the limiting specific volume, v0, are derived by applying viscosity and density data to the equation isobarically, not isothermally as most have done in the past. This led to a further simplification of the free volume equation to a two-constant equation. The second contribution is a purely empirical equation which describes the variation of viscosity as a function of pressure: ln(η/η0)_t = A(e^(BP) - e^(-KP)), where η0 is the viscosity at atmospheric pressure and A, B and K are constants. This 'double-exponential' equation is shown to describe data to within experimental error for viscosities which vary by as much as four decades with pressure.
It also describes the different curvatures which the logarithm of viscosity exhibits when plotted as a function of pressure: concave towards the pressure axis, convex, straight line, or concave and then convex. The many other equations in existence cannot describe this variety of behaviour.
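A hedged sketch of fitting the 'double-exponential' viscosity-pressure equation quoted above; the data points and starting guesses are invented for illustration only:

```python
import numpy as np
from scipy.optimize import curve_fit

def double_exponential(P, A, B, K):
    """ln(eta/eta0) at constant temperature = A * (exp(B*P) - exp(-K*P))."""
    return A * (np.exp(B * P) - np.exp(-K * P))

# Invented illustrative data: pressure (kbar) and log of relative viscosity
pressure = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 8.0, 10.0, 12.0, 14.0])
ln_rel_viscosity = np.array([0.0, 0.9, 1.7, 3.2, 4.6, 6.1, 7.7, 9.4, 11.3])

params, _ = curve_fit(double_exponential, pressure, ln_rel_viscosity, p0=(1.0, 0.2, 0.5))
A, B, K = params
print(f"A = {A:.3f}, B = {B:.3f}, K = {K:.3f}")
```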
195

Computational Fluid Dynamics (CFD) based investigations on the flow of capsules in vertical hydraulic pipelines

Algadi, Abdualmagid January 2017 (has links)
The rapid depletion of energy sources has had a marked impact on the transport sector, where the costs of freight transportation are rising dramatically every year. Significant endeavours have been made to develop innovative means of transport that can be adopted for economical and environmentally friendly operating systems. Transport pipelines are one such alternative mode that can be used to transfer goods. Although the flow behaviour of a solid-liquid mixture in a hydraulic capsule pipeline is quite complicated, due to its dependence on a large number of geometrical and dynamic parameters, it is still a subject of active research. In addition, the published literature is extremely limited in terms of identifying the impact of capsule shape on the flow characteristics of pipelines, even though the shape of these capsules has a significant effect on the hydrodynamic behaviour within such pipelines. This thesis presents a computational investigation employing an advanced Computational Fluid Dynamics (CFD) based tool to simulate the flow of capsules of varied shapes, quantified in the form of a novel shape factor, in a vertical hydraulic capsule pipeline. A 3-D dynamic meshing technique with a six-degrees-of-freedom approach is applied for numerical simulation of the unsteady flow fields in vertical capsule pipelines. Variations in flow-related parameters within the pipeline have been discussed in detail for the geometrical parameters associated with the capsules and the flow conditions within Hydraulic Capsule Pipelines (HCPs). Detailed quantitative and qualitative analyses have been conducted in the current research. The qualitative analysis of the flow field comprises descriptions of the pressure and velocity distribution within the pipeline. The investigations have been conducted on the flow of spherical, cylindrical and rectangular shaped capsules, each considered separately, for offshore applications. The flow behaviour inside HCPs is found to depend on the flow conditions and geometric parameters. The development of novel predictive models for pressure drop and capsule velocity is one of the goals achieved in this research. Moreover, the flow of a variety of different shaped capsules, in combination, has also been investigated, based on the impact of the order of the capsule shapes within the vertical pipeline. It has been found that the motion of mixed capsules along the pipeline shows significant variation compared to the same basic capsule shapes being transported alone across the pipelines. Capsule pipeline designers need accurate data regarding pressure drop, holdup, the shape of the capsules, etc., at early design phases. An optimisation methodology is developed based on the least-cost principle for vertical HCPs. The inputs to the predictive models are the shape factor of the capsule and the solid throughput demanded of the system, while the outputs are the pumping power required for the capsule transportation process and the optimal diameter of the HCP. In the present study, a complete visualisation of capsule flow and design of vertical hydraulic capsule pipelines has been reported. Sophisticated computational tools have made it possible to analyse and map the flow structure in an HCP, which has resulted in a deeper understanding of the flow behaviour and trajectory of the capsules in vertical pipes.
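A minimal sketch of a least-cost selection of pipe diameter in the spirit described above; the pressure-drop correlation and cost coefficients are placeholders, not the predictive models developed in the thesis:

```python
import numpy as np

def pumping_power(diameter, throughput, pressure_drop_model):
    """Hydraulic power = pressure drop (Pa) times volumetric flow rate (m^3/s)."""
    return pressure_drop_model(diameter, throughput) * throughput

def least_cost_diameter(diameters, throughput, pressure_drop_model,
                        capital_cost_coeff=1000.0, energy_cost_per_watt=0.5):
    """Choose the diameter minimising a simple capital-plus-operating cost sum.
    The cost coefficients and the pressure-drop model are placeholders."""
    costs = [capital_cost_coeff * d +
             energy_cost_per_watt * pumping_power(d, throughput, pressure_drop_model)
             for d in diameters]
    return diameters[int(np.argmin(costs))]

# Placeholder correlation: pressure drop falls with diameter, rises with velocity
toy_pressure_drop = lambda d, q: 1.0e5 * (q / (0.25 * np.pi * d**2))**2 / d

print(least_cost_diameter(np.linspace(0.05, 0.5, 20), throughput=0.01,
                          pressure_drop_model=toy_pressure_drop))
```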
196

Intrusion detection in SCADA systems using machine learning techniques

Maglaras, Leandros January 2018 (has links)
Modern Supervisory Control and Data Acquisition (SCADA) systems are essential for monitoring and managing electric power generation, transmission and distribution. In the age of the Internet of Things, SCADA has evolved into large, complex and distributed systems that are prone to new threats in addition to conventional ones. To detect intruders in a timely and efficient manner, a real-time detection mechanism capable of dealing with a range of forms of attack is highly salient. Such a mechanism has to be distributed, low cost, precise, reliable and secure, with a low communication overhead, thereby not interfering with the industrial system's operation. In this commentary, two distributed Intrusion Detection Systems (IDSs) which are able to detect attacks that occur in a SCADA system are proposed, both developed and evaluated for the purposes of the CockpitCI project. The CockpitCI project proposes an architecture based on a real-time Perimeter Intrusion Detection System (PIDS), which provides the core cyber-analysis and detection capabilities and is responsible for continuously assessing and protecting the electronic security perimeter of each critical infrastructure (CI). Part of the PIDS that was developed for the purposes of the CockpitCI project is the OCSVM module. During the project, two novel OCSVM modules were developed and tested using datasets from a small-scale testbed that was created, providing the means to mimic a SCADA system operating both in normal conditions and under the influence of cyberattacks. The first method, namely K-OCSVM, can distinguish real from false alarms using the OCSVM method with default values for the parameters ν and σ, combined with a recursive K-means clustering method. The K-OCSVM is very different from similar methods that require pre-selection of parameters with the use of cross-validation, or from methods that ensemble the outcomes of one-class classifiers. Building on the K-OCSVM, and trying to cope with the high requirements imposed by the CockpitCI project both in terms of accuracy and time overhead, a second method, namely IT-OCSVM, is presented. The IT-OCSVM method is capable of performing outlier detection with high accuracy and low overhead within a temporal window, adequate for the nature of SCADA systems. The two presented methods perform well under several attack scenarios. Having to balance high accuracy, a low false alarm rate, real-time communication requirements and low overhead, under complex and usually persistent attack situations, a combination of several techniques is needed. Despite the range of intrusion detection activities, it has been proven that half of these have human error at their core. Increased empirical and theoretical research into the human aspects of cyber security, based on the volume of human-error-related incidents, can enhance the cyber security capabilities of modern systems. In order to strengthen the security of SCADA systems, another solution is to deliver defence in depth by layering security controls so as to reduce the risk to the assets being protected.
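A hedged, simplified sketch of the K-OCSVM idea (a one-class SVM with default-style parameters followed by K-means clustering of the alarms); the recursive clustering step and the real SCADA traffic features are not reproduced here, and the data are synthetic:

```python
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
normal_traffic = rng.normal(0.0, 1.0, size=(500, 4))            # training: normal operation
test_traffic = np.vstack([rng.normal(0.0, 1.0, size=(80, 4)),    # mostly normal ...
                          rng.normal(4.0, 1.0, size=(20, 4))])   # ... plus anomalies

# One-class SVM trained on normal data only, with default-style nu and gamma
ocsvm = OneClassSVM(kernel="rbf", nu=0.5, gamma="scale").fit(normal_traffic)
alarms = test_traffic[ocsvm.predict(test_traffic) == -1]

# Cluster the alarm scores with K-means; the cluster with the lower mean score is
# treated as "real" alarms, the other as likely false alarms (a simplified,
# non-recursive stand-in for the K-OCSVM post-processing)
if len(alarms) >= 2:
    alarm_scores = ocsvm.decision_function(alarms).reshape(-1, 1)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(alarm_scores)
    real_cluster = int(np.argmin([alarm_scores[labels == k].mean() for k in (0, 1)]))
    real_alarms = alarms[labels == real_cluster]
```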
197

Real-time link quality estimation and holistic transmission power control for wireless sensor networks

Hughes, Jack Bryan January 2018 (has links)
Wireless sensor networks (WSNs) are becoming widely adopted across multiple industries to implement sensor and non-critical control applications. These networks of smart sensors and actuators require energy-efficient and reliable operation to meet application requirements. Regulatory body restrictions, hardware resource constraints and an increasingly crowded network space make realising these requirements a significant challenge. Transmission power control (TPC) protocols are poised for widespread adoption in WSNs to address energy constraints and prolong the lifetime of the networked devices. The complex and dynamic nature of the transmission medium, the processing and memory constraints of the hardware, and the low channel throughput make identifying the optimum transmission power a significant challenge. TPC protocols for WSNs are not well developed, and previously published works suffer from a number of common deficiencies, such as having poor tuning agility, not being practical to implement on resource-constrained hardware, and not accounting for the energy consumed by packet retransmissions. This has resulted in several WSN standards featuring support for TPC but no formal definition being given for its implementation. Addressing the deficiencies associated with current works is required to increase the adoption of TPC protocols in WSNs. In this thesis a novel holistic TPC protocol, with the primary objective of increasing the energy efficiency of communication activities in WSNs, is proposed, implemented and evaluated. Firstly, the opportunities for TPC protocols in WSN applications were evaluated by developing a mathematical model that compares transmission power against communication reliability and energy consumption. Applying this model to state-of-the-art (SoA) radio hardware and parameter values from current WSN standards, the maximum energy savings were quantified at up to 80% for links that belong to the connected region and up to 66% for links that belong to the transitional and disconnected regions. Applying the results from this study, previous assumptions that protocols and mechanisms such as TPC cannot achieve significant energy savings at short communication distances are contested: this study showed that the greatest energy savings are achieved at short communication distances and under ideal channel conditions. An empirical characterisation of wireless link quality in typical WSN environments was conducted to identify and quantify the spatial and temporal factors which affect radio and link dynamics. The study found that wireless link quality exhibits complex, unique and dynamic tendencies which cannot be captured by simplistic theoretical models. Link quality must therefore be estimated online, in real time, using resources internal to the network. An empirical characterisation of raw link quality metrics for evaluating the channel quality, packet delivery and channel stability properties of a communication link was conducted. Using the recommendations from this study, a novel holistic TPC protocol (HTPC), which operates on a per-packet basis and features a dynamic algorithm, is proposed. The optimal TP is estimated by combining channel quality and packet delivery properties to provide a real-time estimate of the minimum channel gain, and using the channel stability properties to implement an adaptive fade margin.
Practical evaluations show that HTPC is adaptive to link quality changes and outperforms current TPC protocols by achieving higher energy efficiency without detrimentally affecting communication reliability. When subjected to several common temporal variations, links implemented with HTPC consumed 38% less energy than the current practice of using a fixed maximum TP, and between 18% and 39% less than current SoA TPC protocols. Through offline computations, HTPC was found to closely match the optimal link performance, with links implemented with HTPC consuming only 7.8% more energy than when the optimal TP is used. On top of this, real-world implementations of HTPC show that it is practical to implement on resource-constrained hardware, as a result of its simple metric evaluation techniques and minimal sampling requirements. Comparing the performance and characteristics of HTPC against previous works, HTPC addresses the common deficiencies associated with current solutions and therefore presents an incremental improvement on SoA TPC protocols.
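A hedged sketch of a per-packet transmission power decision in the spirit of HTPC, combining a minimum channel-gain estimate with an adaptive fade margin; all thresholds, power levels and the margin rule are assumptions, not the protocol's actual parameters:

```python
import numpy as np

TX_LEVELS_DBM = (-16, -12, -8, -4, 0, 4)   # assumed radio output power levels

def select_tx_power(rssi_dbm, tx_power_dbm, prr, sensitivity_dbm=-95.0):
    """Per-packet TP selection: estimate the worst recent channel gain from RSSI
    and the TP in use, widen the fade margin when the channel is unstable or the
    packet reception ratio (PRR) drops, then pick the lowest level that still
    reaches the receiver sensitivity. All numbers here are assumptions."""
    gain_db = np.asarray(rssi_dbm, dtype=float) - float(tx_power_dbm)
    min_gain = gain_db.min()                                   # worst-case channel gain
    fade_margin = 2.0 + 2.0 * gain_db.std() + (5.0 if prr < 0.9 else 0.0)
    required_tp = sensitivity_dbm - min_gain + fade_margin
    for level in sorted(TX_LEVELS_DBM):
        if level >= required_tp:
            return level
    return max(TX_LEVELS_DBM)

# Example: recent RSSI samples received while transmitting at 0 dBm, 95% PRR
print(select_tx_power([-78, -80, -76, -79], tx_power_dbm=0.0, prr=0.95))
```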
198

Investigations into the perception of vertical interchannel decorrelation in 3D surround sound reproduction

Gribben, Christopher January 2018 (has links)
The use of three-dimensional (3D) surround sound systems has seen a rapid increase over recent years. In two-dimensional (2D) loudspeaker formats (i.e. two-channel stereophony (stereo) and 5.1 Surround), horizontal interchannel decorrelation is a well-established technique for controlling the horizontal spread of a phantom image. Use of interchannel decorrelation can also be found within established two-to-five-channel upmixing methods (stereo to 5.1). More recently, proprietary algorithms have been developed that perform 2D-to-3D upmixing, which presumably make use of interchannel decorrelation as well; however, it is not currently known how interchannel decorrelation is perceived in the vertical domain. From this, it is considered that formal investigations into the perception of vertical interchannel decorrelation are necessary. Findings from such experiments may contribute to improved control of a sound source within 3D surround systems (i.e. the vertical spread), in addition to aiding the optimisation of 2D-to-3D upmixing algorithms. The current thesis presents a series of experiments that systematically assess vertical interchannel decorrelation under various conditions. Firstly, a comparison is made between horizontal and vertical interchannel decorrelation, where it is found that vertical decorrelation is weaker than horizontal decorrelation. However, it is also seen that vertical decorrelation can generate a significant increase of vertical image spread (VIS) for some conditions. Following this, vertical decorrelation is assessed for octave-band pink noise stimuli at various azimuth angles to the listener. The results demonstrate that vertical decorrelation is dependent on both frequency and presentation angle – a general relationship between the interchannel cross-correlation (ICC) and VIS is observed for the 500 Hz octave-band and above, and is strongest for the 8 kHz octave-band. Objective analysis of these stimulus signals determined that spectral changes at higher frequencies appear to be associated with VIS perception: at 0° azimuth, the 8 and 16 kHz octave-bands demonstrate potential spectral cues; at ±30°, similar cues are seen in the 4, 8 and 16 kHz bands; and from ±110°, cues are featured in the 2, 4, 8 and 16 kHz bands. In the case of the 8 kHz octave-band, it seems that vertical decorrelation causes a ‘filling in’ of vertical localisation notch cues, potentially resulting in ambiguous perception of vertical extent. In contrast, the objective analysis suggests that VIS perception of the 500 Hz and 1 kHz bands may have been related to early reflections in the listening room. From the experiments above, it is demonstrated that the perception of VIS from vertical interchannel decorrelation is frequency-dependent, with high frequencies playing a particularly important role. A following experiment explores the vertical decorrelation of high frequencies only, where it is seen that decorrelation of the 500 Hz octave-band and above produces a similar perception of VIS to broadband decorrelation, whilst improving tonal quality. The results also indicate that decorrelation of the 8 kHz octave-band and above alone can significantly increase VIS, provided the source signal has sufficient high-frequency energy. The final experimental chapter of the present thesis aims to provide a controlled assessment of 2D-to-3D upmixing, taking into account the findings of the previous experiments.
In general, 2D-to-3D upmixing by vertical interchannel decorrelation had little impact on listener envelopment (LEV), when compared against a level-matched 2D 5.1 reference. Furthermore, amplitude-based decorrelation appeared to be marginally more effective, and ‘high-pass decorrelation’ resulted in slightly better tonal quality for sources that featured greater low frequency energy.
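For reference, a minimal sketch of how an interchannel cross-correlation (ICC) value can be computed between two loudspeaker signals; this is a generic formulation and not necessarily the exact measure used in the thesis:

```python
import numpy as np

def interchannel_cross_correlation(x, y, max_lag=100):
    """Maximum of the normalised cross-correlation between two loudspeaker
    signals over a small lag range (circular shifts used for brevity)."""
    x = x - x.mean()
    y = y - y.mean()
    denom = np.sqrt(np.sum(x**2) * np.sum(y**2))
    corr = [np.sum(x * np.roll(y, lag)) for lag in range(-max_lag, max_lag + 1)]
    return float(np.max(np.abs(corr)) / denom)

rng = np.random.default_rng(1)
a = rng.normal(size=48000)                                          # 1 s of noise at 48 kHz
print(interchannel_cross_correlation(a, a))                         # identical signals -> ~1.0
print(interchannel_cross_correlation(a, rng.normal(size=48000)))    # decorrelated pair -> ~0
```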
199

Development and clinical testing of home-based brain-computer interfaces for neurofeedback and for rehabilitation

Al-Taleb, Manaf Kadum Hussein January 2018 (has links)
Many studies have shown that brain-computer interface (BCI) technology is a potentially powerful tool for the rehabilitation of various psychological and neurological conditions, including restoration of movement and treatment of neuropathic pain (NP). However, most of these studies rely on expensive equipment, limiting their application to laboratory and hospital environments. Therefore, making BCI applications more readily available to patients is the main focus of this thesis. The aim of this study is to develop and assess two inexpensive, wearable neurorehabilitation systems that can be used for patient-managed home-based therapy and are based on a portable brain-computer interface (PBCI) for neurofeedback (NF) applications. Both systems are inspired by neurorehabilitation protocols that have been previously tested on patients using laboratory BCI technology. The brain-computer interface systems are based on a wireless EEG system called EPOC, a Windows PC tablet and custom-made software developed under Visual C++. Both systems consist of a portable BCI, one for neurofeedback (BCI-NF) and the other for controlling functional electrical stimulation (BCI-FES). System development followed the standard steps of user-centred design, while system testing followed the procedures for adopting new services or technologies, aiming to increase the usability of the BCI system in a patient population. The assessment phase, and in particular the assessment of PBCI-NF, included a systematic analysis of the main requirements and barriers for providing home-based BCI as a patient service, including training and support. The results of these chapters provide important feedback on usage patterns and technical problems, which could not be collected from patients' BCI experiences in laboratory or clinical trials. The ability to self-regulate brain waves was tested on able-bodied participants and patients with NP. Within the user-centred design framework, the effectiveness, efficiency, and user acceptance of BCI-NF were demonstrated on patients. The treatment was found to be comparable with the effectiveness of widely used pain drugs, with 53% of patients experiencing a clinically significant reduction in pain. The BCI-FES feasibility study on able-bodied participants and tetraplegic patients with spinal cord injury (SCI) demonstrated a high success rate in recognising motor intention within a single training session. This demonstrates the intuitiveness of the BCI-FES protocol, making it potentially suitable for extended, patient-managed hand therapy. In conclusion, this thesis demonstrated that SCI patients are able to use a BCI system on their own or with help from their caregiver in a home environment. It also demonstrated that the NF treatment has a positive effect on the reduction of central neuropathic pain (CNP) in SCI patients. In addition, this thesis presents promising results of home-based BCI systems in the rehabilitation domain and presents the first step in developing and testing consumer-grade BCI systems for rehabilitation purposes.
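A hedged sketch of the kind of EEG band-power computation that underlies this style of neurofeedback; the alpha band, sampling rate and reward rule below are generic assumptions, not the thesis's protocol:

```python
import numpy as np
from scipy.signal import welch

def band_power(eeg, fs, band=(8.0, 12.0)):
    """Mean power of one EEG channel within a frequency band, via a Welch PSD."""
    freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

fs = 128                                    # Hz, typical consumer EEG sampling rate
t = np.arange(0, 4, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)   # synthetic channel

baseline = band_power(eeg, fs)
# During a neurofeedback session, feedback/reward is given when the current
# band power crosses a threshold relative to the baseline recording
reward = band_power(eeg, fs) > 1.1 * baseline
```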
200

Stain separation, cell classification and histochemical score in digital histopathology images

Liu, Jingxin January 2018 (has links)
This thesis focuses on developing new automatic techniques addressing three typical problems in digital histopathology image analysis: histochemical stain separation at pixel level, cell classification at region level, and histochemical score assessment at image level, with the aim of providing useful tools to help histopathologists in their decision making. First, we study a pixel-level problem, separating positive chemical stains. To realise the full potential of digital pathology, accurate and robust computer techniques for automatically detecting biomarkers play an important role. Traditional methods transform the colour histopathology images into a grayscale image and apply a single threshold to separate positively stained tissues from the background. In this thesis, we show that the colour distribution of the positive immunohistochemical stains varies with the level of luminance and that a single threshold cannot separate positively stained tissues from other tissues, regardless of how the colour pixels are transformed. Based on this observation, two novel luminance-adaptive biomarker detection methods are proposed. The first, termed Luminance Adaptive Multi-Thresholding (LAMT), first separates the pixels according to their luminance levels, and for each luminance level a separate threshold is found for detecting the positive stains. The second, termed Luminance Adaptive Random Forest (LARF), applies one of the most powerful machine learning models, the random forest, as a base classifier to build an ensemble classifier for biomarker detection. Second, we study a cell-level problem, the cell classification task in pathology images. Two different classification models are proposed. The first model, for HEp-2 cell pattern classification, comes with a novel object-graph based feature, which decomposes the binary image into primitive objects and represents them with a set of morphological features. Work on cell classification is further extended using a deep learning model termed the Deep Autoencoding-Classification Network (DACN). The DACN model consists of an autoencoder and a conventional classification convolutional neural network (CNN), with the two sharing the same encoding pipeline. The DACN model is jointly optimized for the classification error and the image reconstruction error based on a multi-task learning procedure. We present experimental results to show that the proposed DACN outperforms all known state-of-the-art methods on two public indirect immunofluorescence stained HEp-2 cell datasets and an H&E stained colorectal adenocarcinoma cell dataset. Third, we study an image-level problem, assessing the histochemical score of a histopathology image. To determine the molecular class of the tumour, pathologists have to manually mark the nuclei activity biomarkers by assigning a histochemical score (H-Score) to each TMA core with a semi-quantitative assessment method. Manually marking positively stained nuclei is a time-consuming, imprecise and subjective process which leads to inter-observer and intra-observer discrepancies. In this thesis, we present an end-to-end deep learning system which directly predicts the H-Score automatically.
Our system imitates the pathologists' decision process and uses one fully convolutional network (FCN) to extract all nuclei regions, a second FCN to extract tumour nuclei regions, and a multi-column convolutional neural network which takes the outputs of the first two FCNs and the stain intensity description image as input and acts as the decision-making mechanism to directly output the H-Score of the input TMA image. To the best of our knowledge, this is the first end-to-end system that takes a TMA image as input and directly outputs a clinical score. We present experimental results which demonstrate that the H-Scores predicted by our model have a very high and statistically significant correlation with experienced pathologists' scores, and that the H-Score discrepancy between our algorithm and the pathologists is on par with the inter-subject discrepancy between the pathologists.
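For context, a minimal sketch of the standard semi-quantitative H-Score calculation that the system is trained to reproduce (computed here from per-nucleus intensity categories; the example proportions are invented):

```python
import numpy as np

def h_score(intensity_labels):
    """Standard semi-quantitative H-Score (range 0-300) from per-nucleus staining
    intensity categories: 0 = negative, 1 = weak, 2 = moderate, 3 = strong."""
    labels = np.asarray(intensity_labels)
    percentages = [100.0 * np.mean(labels == k) for k in (1, 2, 3)]
    return 1 * percentages[0] + 2 * percentages[1] + 3 * percentages[2]

# Invented example: 60% negative, 20% weak, 15% moderate, 5% strong nuclei
labels = np.repeat([0, 1, 2, 3], [60, 20, 15, 5])
print(h_score(labels))   # 1*20 + 2*15 + 3*5 = 65
```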
