471

Development of Artificial Neural Networks Based Interpolation Techniques for the Modeling and Estimation of Radon Concentrations in Ohio

Akkala, Arjun 09 September 2010 (has links)
No description available.
472

Automated Detection and Analysis of Low Latitude Nightside Equatorial Plasma Bubbles

Adkins, Vincent James 21 June 2024 (has links)
Equatorial plasma bubbles (EPBs) are large structures consisting of depleted plasma that generally form on the nightside of Earth's ionosphere along magnetic field lines in the upper thermosphere/ionosphere. While referred to as `bubbles', EPBs tend to be longer along magnetic latitudes and narrower along magnetic longitudes, on the order of thousands and hundreds of kilometers, respectively. EPBs are a well-documented occurrence with observations spanning many decades. As such, much is known about their general behavior, the seasonal variation of their occurrence, increasing/decreasing occurrence with increasing/decreasing solar activity, and their ability to interact and interfere with radio signals such as GPS. This dissertation expands on this understanding by focusing on the detection and tracking of EPBs in the upper thermosphere/ionosphere at equatorial to low latitudes. To do this, far ultraviolet (FUV) emission observations of the recombination of O$^+$ with electrons from the Global-Scale Observations of the Limb and Disk (GOLD) mission are analyzed. GOLD provides consistent data from geostationary orbit over the eastern region of the Americas, the Atlantic, and western Africa. The optical data can be used to pick out gradients in brightness at the 135.6 nm wavelength that correlate with the location of EPBs in the nightside ionosphere. The dissertation provides a novel method to analyze 2-dimensional data with inconsistent time-steps for EPB detection and tracking. During development, preprocessing of large-scale (multiple years) data proved to be the largest time sink. To that end, this dissertation tests the possibility of using convolutional neural networks for detection of EPBs with the end goal of reducing the amount of preprocessing necessary. Further, data from the Ionospheric Connection Explorer's (ICON's) ion velocity meter (IVM) are compared to EPBs detected via GOLD to understand how the ambient plasma around the EPBs behaves. Along with the ambient plasma, zonal and meridional thermospheric winds observed by ICON's Michelson Interferometer for Global High-resolution Thermospheric Imaging (MIGHTI) instrument are analyzed in conjunction with the same EPBs to understand how winds coincident with EPBs behave. An analysis of winds before EPBs form is also performed to assess the ability of both zonal and meridional winds to suppress or amplify EPB formation. / Doctor of Philosophy / Equatorial plasma bubbles (EPBs) are large structures that generally form after sunset along Earth's magnetic equator. While referred to as `bubbles', EPBs tend to be thousands of kilometers from north to south, hundreds of kilometers from east to west, and well over a thousand kilometers in altitude. EPBs are a well-documented occurrence with observations spanning many decades. This includes their ability to interfere with radar and GPS. This dissertation expands on the scientific community's understanding by focusing on the detection and tracking of EPBs along the magnetic equator. To do this, observations from the NASA Global-Scale Observations of the Limb and Disk (GOLD) mission are analyzed. GOLD provides consistent observations looking over the eastern region of the Americas, the Atlantic, and western Africa. A unique method to analyze these data for EPB detection and tracking is developed. This dissertation also tests the possibility of using machine learning for detection of EPBs.
Further, data from the NASA Ionospheric Connection Explorer (ICON) mission are compared to EPBs detected via GOLD to understand how the upper atmosphere and the conductive region within it, known as the ionosphere, interact with the EPBs themselves.
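In the spirit of the gradient-based detection described above, the sketch below flags candidate depletions in a 2-D brightness map by looking for a sharp drop followed by a recovery along each longitude row. It is a rough, hypothetical NumPy illustration: the array layout, threshold value, and toy data are assumptions, not the dissertation's actual GOLD pipeline.

```python
import numpy as np

def detect_depletions(brightness, grad_threshold=0.15):
    """Flag candidate EPB depletions in a 2-D brightness map.

    brightness : 2-D array (latitude rows x longitude columns) of
                 normalized 135.6 nm disk radiance (hypothetical input).
    Returns a boolean mask marking spans where the brightness drops
    sharply and then recovers along longitude -- a crude proxy for the
    walls of a plasma depletion.
    """
    # Gradient along the longitude axis for every latitude row.
    grad = np.gradient(brightness, axis=1)

    mask = np.zeros_like(brightness, dtype=bool)
    for i, row in enumerate(grad):
        drops = np.where(row < -grad_threshold)[0]   # entering a depletion
        rises = np.where(row > grad_threshold)[0]    # exiting a depletion
        for d in drops:
            east_rises = rises[rises > d]
            if east_rises.size:                      # depletion closes again
                mask[i, d:east_rises[0] + 1] = True  # mark span between walls
    return mask

# Toy example: a smooth background with one artificial depletion.
lon = np.linspace(0, 1, 200)
background = np.tile(np.exp(-((lon - 0.5) ** 2) / 0.1), (50, 1))
background[:, 90:110] *= 0.4                         # synthetic "bubble"
print(detect_depletions(background).any())
```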
473

A combined soft computing-mechanics approach to damage evaluation and detection in reinforced concrete beams

Al-Rahmani, Ahmed Hamid Abdulrahman January 1900 (has links)
Master of Science / Department of Civil Engineering / Hayder A. Rasheed / Damage detection and structural health monitoring are topics that have been receiving increased attention from researchers around the world. A structure can accumulate damage during its service life, which in turn can impair the structure’s safety. Currently, visual inspection is performed by experienced personnel in order to evaluate damage in structures. This approach is constrained by time and the availability of qualified personnel. This study aims to facilitate damage evaluation and detection in concrete bridge girders without the need for visual inspection while minimizing field measurements. Simply-supported beams with different geometric, material and cracking parameters (cracks’ depth, width and location) were modeled in three phases using Abaqus finite element analysis software in order to obtain stiffness values at specified nodes. In the first two phases, beams were modeled using beam elements. Phase I included beams with a single crack, while phase II included beams with up to two cracks. For phase III, beams with a single crack were modeled using plane stress elements. The resulting damage databases from the three phases were then used to train two types of Artificial Neural Networks (ANNs). The first network type (ANNf) solves the forward problem of providing a health index parameter based on the predicted stiffness values. The second network type (ANNi) solves the inverse problem of predicting the most probable cracking pattern, where a unique analytical solution is not attainable. In phase I, beams with 3, 5, 7 and 9 stiffness nodes and a single crack were modeled. For the forward problem, ANNIf had the geometric, material and cracking parameters as inputs and stiffness values as outputs. This network provided excellent prediction accuracy measures (R² > 99%). For the inverse problem, ANNIi had the geometric and material parameters as well as stiffness values as inputs and the cracking parameters as outputs. Better prediction accuracy measures were achieved when more stiffness nodes were utilized in the ANN modeling process. It was also observed that decreasing the number of required outputs immensely improved the quality of predictions provided by the ANN. This network provided less accurate predictions (R² = 68%) compared to ANNIf; however, ANNIi still provided reasonable results, considering the non-uniqueness of this problem’s solution. In phase II, beams with 9 stiffness nodes and two cracks were modeled following the same procedure. ANNIIf provided excellent results (R² > 99%) while ANNIIi had less accurate (R² = 65%) but still reasonable predictions. Finally, in phase III, simple-span beams with 3, 5, 7 and 9 stiffness nodes and a single crack were modeled using plane stress elements. ANNIIIf (R² > 99%) provided excellent results while ANNIIIi had less accurate (R² = 65%) but still reasonable predictions. Predictions in this phase were very accurate for the crack depth and location parameters (R² = 97% and 99%, respectively). Further inspection showed that ANNIIIi provided more accurate predictions when compared with ANNIi. Overall, the obtained results were reasonable and showed good agreement with the actual values. This indicates that using ANNs is an excellent approach to damage evaluation, and a viable approach to obtain the analytically unattainable solution of the inverse damage detection problem.
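To make the forward/inverse pairing concrete, the sketch below trains two small multilayer-perceptron regressors on synthetic data: one mapping beam parameters to nodal stiffness (ANNf-style), the other mapping geometry, material, and stiffness back to crack parameters (ANNi-style). The synthetic relationships, feature layout, and network sizes are placeholders, not the thesis's Abaqus-derived database or architecture.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-in for the finite-element damage database (placeholder
# relationships): inputs = [span, section depth, E, crack depth ratio,
# crack location]; outputs = stiffness at 5 nodes, reduced near the crack.
n = 2000
params = rng.uniform([3.0, 0.3, 20.0, 0.05, 0.1],
                     [8.0, 0.8, 40.0, 0.50, 0.9], size=(n, 5))
nodes = np.linspace(0.0, 1.0, 5)
healthy = (params[:, 2] * params[:, 1] ** 3 / params[:, 0])[:, None]
damage = params[:, 3:4] * np.exp(-(nodes[None, :] - params[:, 4:5]) ** 2 / 0.02)
stiffness = healthy * (1.0 - damage)

# Forward network (ANNf-style): parameters -> nodal stiffness values.
ann_f = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 32),
                                   max_iter=3000, random_state=0))
ann_f.fit(params, stiffness)

# Inverse network (ANNi-style): geometry/material + stiffness -> crack parameters.
X_inv = np.hstack([params[:, :3], stiffness])
y_inv = params[:, 3:]  # crack depth ratio and crack location
ann_i = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 32),
                                   max_iter=3000, random_state=0))
ann_i.fit(X_inv, y_inv)

print("forward R^2:", round(ann_f.score(params, stiffness), 3))
print("inverse R^2:", round(ann_i.score(X_inv, y_inv), 3))
```

As in the thesis, the inverse mapping is the harder of the two because several cracking patterns can produce nearly the same stiffness profile.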
474

Online Deception Detection Using BDI Agents

Merritts, Richard Alan 01 January 2013 (has links)
This research has two facets within separate research areas. The research area of Belief, Desire and Intention (BDI) agent capability development was extended. Deception detection research has been advanced with the development of automation using BDI agents. BDI agents performed tasks automatically and autonomously. This study used these characteristics to automate deception detection with limited intervention by human users. This was a useful research area resulting in a capability general enough to have practical application by private individuals, investigators, organizations and others. The need for this research is grounded in the fact that humans are not very effective at detecting deception, whether in written or spoken form. This research extends deception detection capability research in that typical deception detection tools are labor intensive and require extraction of the text in question followed by ingestion into a deception detection tool. A neural network capability module was incorporated to lend the resulting prototype machine learning attributes. The prototype developed as a result of this research was able to classify online data as either "deceptive" or "not deceptive" with 85% accuracy. The false discovery rate for "deceptive" online data entries was 20%, while the false discovery rate for "not deceptive" entries was 10%. The system showed stability during test runs. No computer crashes or other anomalous system behavior were observed during the testing phase. The prototype successfully interacted with an online data communications server database and processed data using neural network input vector generation algorithms within seconds.
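For context on the figures quoted above: the false discovery rate for a label is the fraction of items assigned that label which do not actually carry it. A minimal sketch of the metric follows, using invented labels rather than the prototype's actual outputs.

```python
def false_discovery_rate(predictions, truths, label):
    """FDR for one label: wrongly assigned / all items assigned that label."""
    assigned = [(p, t) for p, t in zip(predictions, truths) if p == label]
    if not assigned:
        return 0.0
    wrong = sum(1 for _, t in assigned if t != label)
    return wrong / len(assigned)

# Toy run with invented labels (not the study's data).
preds  = ["deceptive", "deceptive", "not deceptive", "deceptive", "not deceptive"]
truths = ["deceptive", "not deceptive", "not deceptive", "deceptive", "deceptive"]
print(false_discovery_rate(preds, truths, "deceptive"))      # 1/3
print(false_discovery_rate(preds, truths, "not deceptive"))  # 1/2
```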
475

Predicting corporate credit ratings using neural network models

Frank, Simon James 12 1900 (has links)
Thesis (MBA (Business Management))--University of Stellenbosch, 2009. / ENGLISH ABSTRACT: For many organisations who wish to sell their debt, or investors who are looking to invest in an organisation, company credit ratings are an important surrogate measure for the marketability or risk associated with a particular issue. Credit ratings are issued by a limited number of authorised companies – the predominant ones being Standard & Poor’s, Moody’s and Fitch – who have the necessary experience, skills and motivation to calculate an objective credit rating. In the wake of some high-profile bankruptcies, there has been recent conjecture about the accuracy and reliability of current ratings. Issues relating specifically to the lack of competition in the rating market have been identified as possible causes of the poor timeliness of rating updates. Furthermore, the cost of obtaining (or updating) a rating from one of the predominant agencies has also been identified as a contributing factor. The high costs can lead to a conflict of interest where rating agencies are obliged to issue more favourable ratings to ensure continued patronage. Based on these issues, there is sufficient motive to create more cost-effective alternatives for predicting corporate credit ratings. It is not the intention of these alternatives to replace the relevancy of existing rating agencies, but rather to make the information more accessible, increase competition, and hold the agencies more accountable for their ratings through better transparency. The alternative method investigated in this report is the use of a backpropagation artificial neural network to predict corporate credit ratings for companies in the manufacturing sector of the United States of America. Past research has shown that backpropagation neural networks are effective machine learning techniques for predicting credit ratings because no prior subjective or expert knowledge, or assumptions on model structure, are required to create a representative model. For the purposes of this study, only public information and data is used to develop a cost-effective and accessible model. The basis of the research is the assumption that all information (both quantitative and qualitative) that is required to calculate a credit rating for a company is contained within financial data from income statements, balance sheets and cash flow statements. The premise of the assumption is that any qualitative or subjective assessment about company creditworthiness will ultimately be reflected through financial performance. The results show that a backpropagation neural network, using 10 input variables on a data set of 153 companies, can classify 75% of the ratings accurately. The results also show that including collinear inputs to the model can affect the classification accuracy and prediction variance of the model. It is also shown that latent projection techniques, such as partial least squares, can be used to reduce the dimensionality of the model without making any assumption about data relevancy. The output of these models, however, does not improve the classification accuracy achieved using selected uncorrelated inputs. / AFRIKAANSE OPSOMMING (translated): For many organisations that wish to sell debt instruments, or investors looking to invest in a company, a company credit rating is an important surrogate measure for the marketability of, or the risk associated with, a particular issue.
Credit ratings are issued by a limited number of approved companies – the most important being Standard & Poor’s, Moody’s and Fitch – which all have the necessary experience, expertise and reason to calculate objective credit ratings. In the aftermath of a number of high-profile bankruptcies, there has recently been speculation about the accuracy and reliability of current ratings. Issues relating specifically to the lack of competition in the rating market have been identified as a possible cause of the poor timeliness of rating updates. Furthermore, the cost of obtaining a rating (or a rating update) from one of the dominant agencies is also seen as a further contributing factor. The high costs can lead to a conflict of interest when rating agencies come under pressure to issue favourable ratings in order to retain clients. As a result of these issues there is sufficient motivation to investigate more cost-effective alternatives for estimating corporate credit ratings. It is not the aim of these alternatives to replace the relevance of existing rating agencies, but rather to make the information more accessible, increase competition, and hold the agencies more accountable for their ratings through better transparency. The alternative approach investigated in this report is the use of an artificial neural network to estimate the credit ratings of manufacturing companies in the USA. Previous research has shown that neural networks are effective machine learning techniques for estimating credit ratings because no prior or expert knowledge, or assumptions about the model structure, are needed to build a representative model. For the purposes of this research report only public information and data are used to build a cost-effective and accessible model. The basis of this research is the assumption that all information (both quantitative and qualitative) required to calculate a credit rating for a company is contained in the financial data in the income statements, balance sheets and cash flow statements. The assumption is thus that any qualitative or subjective assessment of a company’s creditworthiness will ultimately be reflected in its financial performance. The results show that a neural network with 10 input variables, on a data set of 153 companies, classifies 75% of the ratings accurately. The results also show that including collinear inputs in the model can affect the classification accuracy and the variance of the estimates. It is further shown that latent projection techniques, such as partial least squares, can reduce the dimensionality of the model without making any assumptions about data relevancy. The output of these models, however, does not improve the classification accuracy achieved with the chosen uncorrelated inputs. 121 pages.
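A rough outline of this kind of model in scikit-learn is sketched below: a backpropagation network on ten financial-ratio inputs, optionally preceded by a partial-least-squares projection. The data, rating buckets, layer sizes, and PLS component count are invented placeholders, not the thesis's data set or configuration.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# Placeholder data: 153 companies x 10 financial ratios, ratings in 4 classes.
X = rng.normal(size=(153, 10))
y = rng.integers(0, 4, size=153)  # e.g. AAA/AA/A/BBB buckets (invented)

# Backpropagation network on the raw (scaled) ratios.
clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(15,), max_iter=3000,
                                  random_state=0))
clf.fit(X, y)

# Optional PLS step to project collinear inputs onto a few latent components
# before classification (in practice the ratings would usually be one-hot
# encoded for the PLS step rather than passed as a single integer column).
pls = PLSRegression(n_components=4).fit(X, y)
X_latent = pls.transform(X)
clf_pls = make_pipeline(StandardScaler(),
                        MLPClassifier(hidden_layer_sizes=(15,), max_iter=3000,
                                      random_state=0))
clf_pls.fit(X_latent, y)

print("raw-input training accuracy:", round(clf.score(X, y), 2))
print("PLS-input training accuracy:", round(clf_pls.score(X_latent, y), 2))
```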
476

An artificially-intelligent biomeasurement system for total hip arthroplasty patient rehabilitation

Law, Ewan James January 2012 (has links)
This study concerned the development and validation of a hardware and software biomeasurement system, which was designed to be used by physiotherapists, general practitioners and other healthcare professionals. The purpose of the system is to detect and assess gait deviation in the form of reduced post-operative range of movement (ROM) of the replacement hip joint in total hip arthroplasty (THA) patients. In so doing, the following original work is presented: the production of a wearable, microcontroller-equipped system able to wirelessly relay accelerometer sensor data on the subject’s key hip-position parameters to a host computer, which logs the data for later analysis. The development of an artificial neural network is also reported; the network processes the sensor data and outputs an assessment of the subject’s hip ROM in the flexion/extension and abduction/adduction rotations (forward and backward swing, and outward and inward movement of the hip, respectively). A review of the literature in the area of biomeasurement devices is also presented. A major data collection was carried out using twenty-one THA patients, where the device output was compared to that of a Vicon motion analysis system, which is considered the ‘gold standard’ in clinical gait analysis. The Vicon system was used to show that the device developed did not itself affect the patient’s hip, knee or ankle gait cycle parameters when in use, and produced measurements of hip flexion/extension and abduction/adduction closely approximating those of the Vicon system. In patients who had gait deviations manifesting in reduced ROM of these hip parameters, it was demonstrated that the device was able to detect and assess the severity of these excursions accurately. The results of the study substantiate that the system developed could be used as an aid for healthcare professionals in the following ways: to objectively assess gait deviation in the form of reduced flexion/extension and abduction/adduction in the human hip after replacement; to monitor patient hip ROM post-operatively; and to assist in planning gait rehabilitation strategies related to these hip parameters.
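The abstract does not detail how the network maps accelerometer readings to joint angles, so purely as an illustration of the underlying relationship, the sketch below computes static tilt angles from a thigh-mounted accelerometer's gravity components. This simple geometric estimate ignores the dynamic accelerations that the thesis's neural network is presumably there to handle, and the axis convention is an assumption.

```python
import numpy as np

def tilt_angles(acc):
    """Rough sagittal/frontal tilt angles (degrees) from a thigh-mounted
    accelerometer. acc is an (n, 3) array of (x, y, z) readings in g;
    the axis convention is an assumption, and this static-tilt estimate
    is not the thesis's ANN-based approach."""
    ax, ay, az = acc[:, 0], acc[:, 1], acc[:, 2]
    flexion   = np.degrees(np.arctan2(ax, np.sqrt(ay ** 2 + az ** 2)))
    abduction = np.degrees(np.arctan2(ay, np.sqrt(ax ** 2 + az ** 2)))
    return flexion, abduction

def range_of_movement(angles):
    """ROM over a gait cycle: maximum minus minimum of the angle trace."""
    return float(np.max(angles) - np.min(angles))

# Toy gait cycle: a sinusoidal thigh swing of about +/-20 degrees.
t = np.linspace(0, 1, 100)
theta = np.radians(20 * np.sin(2 * np.pi * t))
acc = np.column_stack([np.sin(theta), np.zeros_like(t), np.cos(theta)])
flex, _ = tilt_angles(acc)
print(round(range_of_movement(flex), 1))  # ~40 degrees peak-to-peak
```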
477

Novel control of a high performance rotary wood planing machine

Chamberlain, Matthew January 2013 (has links)
Rotary planing and moulding machining operations have been employed within the woodworking industry for a number of years. Due to the rotational nature of the machining process, cuttermarks, in the form of waves, are created on the machined timber surface. It is the nature of these cuttermarks that determines the surface quality of the machined timber. It has been established that cutting tool inaccuracies and vibrations are a prime factor in the form of the cuttermarks on the timber surface. A principal aim of this thesis is to create a control architecture that is suitable for the adaptive operation of a wood planing machine in order to improve the surface quality of the machined timber. In order to improve the surface quality, a thorough understanding of the principles of wood planing is required. These principles are stated within this thesis, and the ability to manipulate the rotary wood planing process in order to achieve a higher surface quality is shown. An existing test rig facility is utilised within this thesis; however, to facilitate higher cutting and feed speeds, as well as possible future implementations such as extended cutting regimes, the test rig has been modified and enlarged. This test rig allows for the dynamic positioning of the centre of rotation of the cutterhead during a cutting operation through the use of piezoelectric actuators, with a displacement range of ±15μm. A new controller for the system has been generated. Within this controller are a number of tuneable parameters. It was found that these parameters were dependent on a large number of external factors, such as operating speeds and run-out of the cutting knives. A novel approach to the generation of these parameters has been developed and implemented within the overall system. Both cutterhead inaccuracies and vibrations can be overcome, to some degree, by the vertical displacement of the cutterhead. However, a crucial piece of information, the particular displacement profile, is not known. Therefore a novel approach, consisting of a subtle change to the displacement profile and then a pattern matching approach, has been implemented on the test rig. Within the pattern matching approach the surface profiles are simplified to a basic form. This basic form allows for a much simplified approach to the pattern matching whilst producing a result suitable for the subtle change approach. In order to compress the data, a Principal Component Analysis was performed on the measured surface data. Patterns were found to be present in the resultant data matrix, and so investigations into defect classification techniques have been carried out using both K-Nearest Neighbour techniques and Neural Networks. The application of these novel approaches has yielded a higher system performance, at no additional cost to the mechanical components of the wood planing machine, both in terms of wood throughput and machined timber surface quality.
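A hedged sketch of the PCA-plus-classifier stage is given below; the surface profiles, defect classes, component count, and neighbour count are invented, not the thesis's measured rig data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)

# Placeholder data: 300 measured surface profiles (200 samples each),
# labelled with an invented defect class (0 = acceptable waviness,
# 1 = knife run-out pattern, 2 = vibration-induced defect).
x = np.linspace(0, 4 * np.pi, 200)
profiles, labels = [], []
for _ in range(300):
    defect = int(rng.integers(0, 3))
    wave = np.sin(10 * x) + 0.3 * defect * np.sin((3 + defect) * x)
    profiles.append(wave + 0.1 * rng.normal(size=x.size))
    labels.append(defect)
profiles, labels = np.array(profiles), np.array(labels)

# Compress the profiles with PCA, then classify with k-nearest neighbours.
clf = make_pipeline(PCA(n_components=10),
                    KNeighborsClassifier(n_neighbors=5))
clf.fit(profiles, labels)
print("training accuracy:", round(clf.score(profiles, labels), 2))
```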
478

A New Islanding Detection Method Based On Wavelet-transform and ANN for Inverter Assisted Distributed Generator

Guan, Zhengyuan 01 January 2015 (has links)
Nowadays islanding has become a significant issue with the increasing use of distributed generators in power systems. In order to effectively detect islanding after a DG disconnects from the main source, the author first studied two passive islanding detection methods in this thesis: the THD&VU method and the wavelet-transform method. Compared with other passive methods, each of them has a small non-detection zone, but both are based on a threshold limit, which is very hard to set. What’s more, when these two methods were applied to practical signals distorted by noise, they performed worse than anticipated. Thus, a new composite intelligence-based method is presented in this thesis to address these drawbacks. The proposed method first uses the wavelet transform to detect the occurrence of events (both islanding and non-islanding) due to its sensitivity to sudden changes. Then the approach utilizes an artificial neural network (ANN) to classify islanding and non-islanding events. In this process, three features based on THD&VU are extracted as the inputs of the ANN classifier. The performance of the proposed method was tested on two typical distribution networks. The results obtained for the two cases indicate that the developed method can effectively detect islanding with a low misclassification rate.
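A rough sketch of the two-stage idea — a wavelet transform to flag a sudden change, then an ANN fed with THD/voltage-unbalance-style features — is shown below using the PyWavelets and scikit-learn libraries. The feature definitions, threshold, and training data are placeholders, not the thesis's actual signals or settings.

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def detail_energy(signal, wavelet="db4", level=3):
    """Energy of the finest detail coefficients; an abrupt change in the
    measured voltage produces a spike in this quantity."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return float(np.sum(coeffs[-1] ** 2))

def event_detected(signal, threshold=0.05):
    """Stage 1: flag an event when the detail energy exceeds a threshold
    (the threshold value here is arbitrary)."""
    return detail_energy(signal) > threshold

# Stage 2: classify flagged events with an ANN. The three features stand in
# for the THD- and voltage-unbalance-based features described in the thesis;
# the training data below are invented.
rng = np.random.default_rng(4)
X = rng.normal(size=(500, 3))             # [THD_v, THD_i, voltage unbalance]
y = (X @ np.array([1.0, 0.5, 1.5]) > 0).astype(int)  # 1 = islanding (toy rule)
clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000,
                                  random_state=0)).fit(X, y)

# Toy check: a clean 60 Hz waveform vs. one with an abrupt offset.
t = np.linspace(0, 0.2, 2400)
clean = np.sin(2 * np.pi * 60 * t)
stepped = clean.copy()
stepped[1200:] += 1.0                      # abrupt change, e.g. a switching event
print(event_detected(clean), event_detected(stepped))
print("classifier training accuracy:", round(clf.score(X, y), 2))
```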
479

Study of WW decay of a Higgs boson with the ALEPH and CMS detectors

Delaere, Christophe 06 July 2005 (has links)
The Standard Model is a mathematical description of the very nature of elementary particles and their interactions, now seen as relativistic quantum fields. A key feature of the theory is the Brout-Englert-Higgs mechanism, responsible for the spontaneous breaking of the underlying gauge symmetry, and which implies the existence of a neutral Higgs particle. Searches for the Higgs boson were conducted at the Large Electron Positron collider until 2000 and are still ongoing at the Tevatron collider, but the particle has not been observed. In order to better constrain models with an exotic electroweak symmetry breaking sector, a search for a Higgs boson decaying into a W pair is carried out with the ALEPH detector on 453 pb⁻¹ of data collected at center-of-mass energies up to 209 GeV. The analysis is optimized for the many topologies resulting from the six-fermion final state. A lower limit at 105.8 GeV/c² on the Higgs boson mass in a fermiophobic Higgs boson scenario is obtained. The ultimate machine for the Higgs boson discovery is the Large Hadron Collider, which is being built at CERN. In order to evaluate the physics potential of the CMS detector, the WH associated production of a Higgs boson decaying into a W pair is studied. The performance of data acquisition and its sophisticated trigger system, particle identification and event reconstruction is investigated by performing a detailed analysis on simulated data. Three-lepton final states are shown to provide interesting possibilities. For an integrated luminosity of 100 fb⁻¹, a potential signal significance of more than 5σ is obtained in the mass interval between 155 and 178 GeV/c². The corresponding precision on the Higgs boson mass and partial decay width into W pairs is evaluated. This channel also provides one of the very few possible avenues towards the discovery of a fermiophobic Higgs boson below 180 GeV/c². These studies required many original technical developments, which are also presented.
480

Development and Implementation of an Online Kraft Black Liquor Viscosity Soft Sensor

Alabi, Sunday Boladale January 2010 (has links)
The recovery and recycling of the spent chemicals from the kraft pulping process are economically and environmentally essential in an integrated kraft pulp and paper mill. The recovery process can be optimised by firing high-solids black liquor in the recovery boiler. Unfortunately, due to a corresponding increase in the liquor viscosity, in many mills black liquor is fired at reduced solids concentration to avoid possible rheological problems. Online measurement, monitoring and control of the liquor viscosity are deemed essential for recovery boiler optimization. However, in most mills, including those in New Zealand, black liquor viscosity is not routinely measured. Four batches of black liquors having solids concentrations ranging between 47 % and 70 % and different residual alkali (RA) contents were obtained from Carter Holt Harvey Pulp and Paper (CHHP&P), Kinleith mill, New Zealand. Weak black liquor samples were obtained by diluting the concentrated samples with deionised water. The viscosities of the samples at solids concentrations ranging from 0 to 70 % were measured using open-cup rotational viscometers at temperatures ranging from 0 to 115 °C and shear rates between 10 and 2000 s⁻¹. The effect of a post-pulping process, liquor heat treatment (LHT), on the liquors’ viscosities was investigated in an autoclave at temperatures ≥ 180 °C for at least 15 minutes. The samples exhibit both Newtonian and non-Newtonian behaviours depending on temperature and solids concentration; the onsets of these behaviours are liquor-dependent. In conformity with the literature data, at high solids concentrations (> 50 %) and low temperatures, they exhibit shear-thinning behaviour with or without thixotropy, but the shear-thinning/thixotropic characteristics disappear at high temperatures (≥ 80 °C). Generally, when the apparent viscosities of the liquors are ≤ ~1000 cP, the liquors show Newtonian or near-Newtonian behaviour. These findings demonstrate that New Zealand black liquors can be safely treated as Newtonian fluids under industrial conditions. Further observations show that at low solids concentrations (< 50 %), viscosity is fairly independent of the RA content; however, at solids concentrations > 50 %, viscosity decreases with increasing RA content of the liquor. This shows that the RA content of black liquor can be manipulated to control the viscosity of high-solids black liquors. The LHT process had a negligible effect on the low-solids liquor viscosity but led to a significant and permanent reduction of the high-solids liquor viscosity by a factor of at least 6. Therefore, the incorporation of a LHT process into an existing kraft recovery process can help to obtain the benefits of high-solids liquor firing without a concern for the attendant rheological problems. A variety of existing and proposed viscosity models were obtained under different constraints, using traditional regression modelling tools and an artificial neural network (ANN) paradigm. Hitherto, existing models have relied on traditional regression tools and were mostly applicable to limited ranges of process conditions. On the one hand, composition-dependent models were obtained as a direct function of solids concentration and temperature, or solids concentration, temperature and shear rate; the relationships between these variables and the liquor viscosity are straightforward.
The ANN-based models developed in this work were found to be superior to the traditional models in terms of accuracy, generalization capability and applicability to a wide range of process conditions. If the parameters of the resulting ANN models can be successfully correlated with the liquor composition, the models would be suitable for online application. Unfortunately, black liquor viscosity depends on its composition in a complex manner; the direct correlation of the model parameters with the liquor composition is not yet a straightforward matter. On the other hand, for the first time in Australasia, the limitations of the composition-dependent models were addressed using centrifugal pump performance parameters, which are easy to measure online. A variety of centrifugal pump-based models were developed based on estimated data obtained via the Hydraulic Institute viscosity correction method. This is in contrast to the traditional approaches, which depend largely on actual experimental data that can be difficult and expensive to obtain. The resulting age-independent centrifugal pump-based model was implemented online as a black liquor viscosity soft sensor at the number 5 recovery boiler at the CHHP&P Kinleith mill, New Zealand, where its performance was evaluated. The results confirm its ability to effectively account for variations in the liquor composition. Furthermore, it was able to give robust viscosity estimates in the presence of a changing pump operating point. Therefore, it is concluded that this study opens a new and effective way for kraft black liquor viscosity sensor development.
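As an outline of a composition-dependent ANN viscosity model of the kind described above, one could regress log-viscosity on solids concentration, temperature, and shear rate, as sketched below. The formula used to generate the placeholder data is invented, not the thesis's measured viscometer data or its actual correlations.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)

# Placeholder training data: solids concentration (%), temperature (deg C),
# shear rate (1/s) -> apparent viscosity (cP). The generating formula is an
# invented stand-in for measured data.
n = 1500
solids = rng.uniform(10, 70, n)
temp = rng.uniform(20, 115, n)
shear = rng.uniform(10, 2000, n)
log_visc = (0.09 * solids - 0.02 * temp
            - 0.05 * (solids / 70) * np.log10(shear)
            + rng.normal(scale=0.05, size=n))

X = np.column_stack([solids, temp, shear])
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=3000,
                                   random_state=0))
model.fit(X, log_visc)  # predicting log10(viscosity) keeps the target well scaled

# Soft-sensor style query: a high-solids liquor at firing temperature.
query = np.array([[68.0, 110.0, 500.0]])
print("predicted viscosity ~", round(10 ** model.predict(query)[0], 1), "cP")
```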
