531

Correlation between machining monitoring signals, cutting tool wear and surface integrity on high strength titanium alloys

Ray, Nathan January 2018 (has links)
It is widely accepted that tool wear has a direct impact on a machining process, playing a key part in surface integrity, part quality and, therefore, process efficiency. By establishing the state of a tool during a machining process, it should be possible to estimate both the surface properties and the optimal process parameters, while allowing intelligent predictions about the future state of the process to be made, thus ultimately reducing unexpected component damage. This thesis intends to address the problem of tool wear prediction during machining where wear rates vary between components; for instance, due to the relatively large size of the component forging and, therefore, inherent material variations when compared to existing research. In this case, the industrial partner, Safran Landing Systems, is interested in the ability to predict tool wear during the finish milling of large, curved, titanium components, despite differing material properties and, therefore, tool wear rates.

The thesis is split into four key parts, the first of which describes in detail the formulation and implementation of an experimental procedure intended to provide a working set of industrially representative monitoring data that can be used throughout the remainder of the thesis. This part includes development of a relevant machining strategy, material specimen extraction, sensor selection and placement, and 3D tool geometry measurement, all of which were completed at the industrial partner's facilities. It finishes with a preliminary investigation into the data collected during the machining process from the tools, material specimens, and sensors placed in close proximity to the cutting zone.

The second, third, and fourth parts follow logically from one another, beginning with a state classification problem and ending with a full dynamic model prediction of wear during the machining of large landing gear components; the method, however, is applicable to many other machining scenarios using the new technique applied in this thesis. The state classification chapter is a necessary first step in developing a predictive model, as it aims to show that the data are indeed separable based upon the generating wear state. Once this is confirmed, given the sequential nature of tool wear, the order of observations can be included in the modelling in an attempt to improve classification accuracy. This forms the basis of the state tracking chapter, and leads naturally into the full dynamic model prediction in the final part.

This is a promising result for the machining community, as process monitoring often relies on operator expertise to detect wear rate fluctuations and, in turn, results in over-conservative tool usage limits, adding time and expense to many complex machining processes. It also presents the opportunity to predict part quality through pre-existing relationships between the acquired signals and material surface finish - correlations which are explored and presented as part of this thesis. The solution to predicting a varying wear rate within a harsh machining environment introduced in this thesis is based on the application of a Gaussian process (GP) NARX (Nonlinear Auto-Regressive with eXogenous inputs) model, borrowed from the machine learning and, more recently, structural health monitoring (SHM) communities.
The GP-NARX approach is found to be well suited to the application of wear prediction during machining, and forms a promising contribution to the development of autonomous manufacturing processes.
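To make the GP-NARX idea concrete, the following is a minimal, generic sketch (not the model developed in the thesis) in which a Gaussian process regresses the next wear value on lagged wear values (the auto-regressive part) and lagged monitoring-signal features (the exogenous part); the lag length, kernel and synthetic data are assumptions made purely for illustration.

```python
# Minimal GP-NARX sketch (illustrative only, not the thesis's model):
# predict the next wear value from lagged wear and lagged sensor features.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def make_narx_features(wear, signal, lag=3):
    """Stack lagged wear (auto-regressive) and lagged signal (exogenous) terms."""
    X, y = [], []
    for t in range(lag, len(wear)):
        X.append(np.concatenate([wear[t - lag:t], signal[t - lag:t]]))
        y.append(wear[t])
    return np.array(X), np.array(y)

# Hypothetical training data: wear measurements and one correlated monitoring signal.
rng = np.random.default_rng(0)
wear = np.cumsum(rng.uniform(0.01, 0.05, 200))      # monotonically increasing wear
signal = 0.5 * wear + rng.normal(0, 0.05, 200)      # noisy, wear-related sensor feature

X, y = make_narx_features(wear, signal, lag=3)
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X[:150], y[:150])

# One-step-ahead predictions with uncertainty (the GP's predictive standard deviation).
mean, std = gp.predict(X[150:], return_std=True)
```

The predictive standard deviation returned alongside the mean is what makes the GP formulation attractive here: a tool usage limit can be set against the predicted wear plus its uncertainty rather than a fixed conservative bound.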
532

Ultrasonic joining of metal-polymer surfaces

Al-Obaidi, Anwer J. January 2018 (has links)
The joining of dissimilar materials is becoming increasingly important, especially for structural applications and in the transportation industries, where reducing weight decreases fuel consumption and CO2 emissions. Joining lightweight materials (metals and polymers) is commonly performed using mechanical fastenings, such as screws, bolts, and rivets, or adhesion techniques. However, disadvantages of such mechanical methods are considerable stress concentration around the fastener hole, the potential for corrosion problems, and the possibility of fatigue cracking in metallic materials. Ultrasonic joining (USJ) is particularly suitable where rapid processing and good process reliability are demanded; it is also characterised by high joint quality and strength and by its energy-saving capabilities. A relatively good body of work exists for polymer-polymer and metal-metal USJ, but little research has been conducted into the joining of dissimilar materials. This is therefore the focus of this thesis. The amorphous thermoplastic polymer ABS 750SW and the aluminium alloy Al6082-T6 are common engineering materials for the manufacture of hybrid structures and components. In the transportation industries, these lightweight materials are used in both decorative and structural parts, such as internal and external panels and bumpers. Additionally, metal-polymer laminates are a much more desirable and versatile option when the replacement of a full metal construction is required. This work presents a comprehensive study of the novel joining of these two materials through USJ and investigates the effect of the joining parameters on joint strength. The joints were bonded without using any additional materials (fillers).
533

Influence of additives on agglomeration behaviours/formation in a laboratory-scale fixed bed combustion of biomass fuels

Akindele, Ojo David January 2018 (has links)
This research has focussed on the impact of kaolin as an additive on the agglomeration behaviour of willow, white wood, and miscanthus during combustion in a laboratory-scale fixed bed, in which a Gooch ceramic crucible was used as the combustion chamber. It aimed to reduce agglomeration during the combustion of these selected problematic biomass fuels. Biomass fuels are CO2 neutral and very rich in alkali metals, especially potassium (K) and sodium (Na), with potassium playing the predominant role in the agglomeration of these fuels. During the combustion processes, agglomerates were formed in the combustion chamber at 750 °C and 802 °C at atmospheric pressure. This was attributed to the formation of eutectic compounds in the form of alkali silicates (K-silicates or Na-silicates). The eutectic compound has a lower melting temperature than either the alkali metals or the silica from the sand, which is the bed material; it therefore melted abruptly in the bed and formed lumps in the form of agglomerates. Energy Dispersive X-ray spectroscopy (EDX) carried out on the agglomerates indicated that the interior of the agglomerates was dominated by silicon (Si) from the sand, while the exterior or periphery was dominated by the alkali metals potassium (K) and sodium (Na) from the biomass fuel ash. Other trace elements present in the agglomerates, as confirmed by the EDX analyses, were aluminium (Al), calcium (Ca), chlorine (Cl), iron (Fe), phosphorus (P), and magnesium (Mg). When the kaolin additive, Al2Si2O5(OH)4, was added to the bed materials and the combustion processes were repeated under the same operating conditions, no agglomerates were formed at 750 °C or 802 °C. Instead, a eutectic compound in the form of an alumina-alkali silicate was formed, which has a higher melting temperature than the alkali from the biomass fuels and the silica from the bed materials; therefore no agglomerates formed at these temperatures (750 °C and 802 °C). FactSage software was used extensively to predict the eutectic points (eutectic temperatures) on both the binary and the ternary phase diagrams. With the inclusion of the kaolin additive in the bed materials, the binary phase diagrams predicted that agglomeration would occur in the combustion bed at 1200 °C if the biomass fuel is dominated by potassium (K), and at 1700 °C if the biomass fuel is dominated by sodium (Na). On the ternary phase diagrams, with the addition of kaolin to the bed materials, initial agglomeration was predicted to occur at 1550 °C if the biomass fuel is dominated by potassium (K), rising to 1700 °C if the biomass fuel is dominated by sodium (Na). This supports the observation that the sodium-based system has a higher melting temperature than the potassium-based one. The growth in particle size, from less than 1 mm diameter before combustion to 7 mm diameter in the agglomerates formed from the combustion of willow, and 10 mm diameter in the agglomerates produced from the combustion of both miscanthus and white wood, is clear evidence that agglomeration occurred in the bed. Post-combustion analyses, Scanning Electron Microscopy and Energy Dispersive X-ray Spectroscopy (SEM and EDX), carried out on the agglomerate samples also confirmed that agglomeration took place in the bed.

Huge agglomerates were formed at a lower temperature of 350 °C when potassium hydroxide (KOH) and silica sand were heated directly (reality test), and harder, tougher agglomerates were produced at 502 °C. This confirmed that the agglomerates were produced by the formation of a low-temperature alkali silicate in the form of K-silicate. The results of this research indicate that the Gooch ceramic crucible is a reliable combustion chamber for laboratory-scale fixed bed biomass combustion experiments, as it provided better heat distribution within the combustion chamber than a conventional ceramic crucible. Moreover, kaolin was confirmed as an additive capable of reducing agglomeration during the combustion of biomass fuels in a laboratory-scale fixed bed and other combustion beds.
534

Water and thermal management of PEM fuel cells

Raja Arif, Raja Muhammad Aslam January 2018 (has links)
Proton Exchange Membrane (PEM) fuel cells have great potential to replace conventional, fossil-fuel-dependent power conversion technologies in a wide range of portable, automotive and stationary applications, owing to their high efficiency, quick start-up and sizing flexibility. However, there are still some technical challenges that hinder the widespread deployment of this clean technology into the marketplace. Two of the key challenges are the water management and thermal management of the fuel cell; any mismanagement of water and/or heat could lead to water flooding or membrane dry-out, both of which are detrimental to fuel cell performance and durability. In order to gain insight into how to manage water and heat within fuel cells, a transparent, commercially available PEM fuel cell has been directly visualised using high-resolution digital and thermal cameras at both sides of the cell. With this technique, real-time videos showing how liquid water and heat evolve have been recorded, with particular emphasis on how liquid water forms, accumulates and moves along the flow channels. Further, the sensitivity of the distribution of liquid water and temperature within the fuel cell to the operating conditions has been investigated. For this investigation, a new parameter, termed the wetted bend area ratio, has been introduced to indicate how flooded the flow channels are and subsequently to explain the variations in the performance of the PEM fuel cell as the operating conditions change. The main findings are that the temperature distribution across the MEA becomes less uniform as the wetted area ratio decreases. Further, the temperature distribution along the MEA at the cathode side becomes less uniform as the air flow rate increases. In addition, there exist optimum values of the operating conditions that maximise fuel cell performance. Since the operation of PEM fuel cells at high temperatures (i.e. > 100 °C) is an increasingly adopted way to resolve water flooding problems, the reliability of the currently used components at such temperatures remains questionable. To partly answer this question, the gas permeability of the diffusion media used in PEM fuel cells has been investigated at higher temperatures for the first time. The results show that the gas permeability increases as the operating temperature increases, which may enhance reactant transport within the PEM fuel cell.
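The abstract does not give the exact definition of the wetted bend area ratio, so the following is only a hypothetical sketch of the kind of metric implied: from a binary mask of liquid water segmented out of an image of the transparent cell, report the wetted fraction of the channel-bend area.

```python
# Hypothetical illustration of a "wetted bend area ratio"-style metric; the thesis's
# actual definition is not given here, so this is only a guess at the idea.
import numpy as np

def wetted_bend_area_ratio(water_mask: np.ndarray, bend_mask: np.ndarray) -> float:
    """Fraction of the channel-bend area covered by liquid water (0 = dry, 1 = flooded)."""
    bend_pixels = bend_mask.sum()
    if bend_pixels == 0:
        return 0.0
    return float((water_mask & bend_mask).sum() / bend_pixels)

# Toy example: a 4x4 region where the right half is the bend and one bend pixel is wet.
bend = np.zeros((4, 4), dtype=bool); bend[:, 2:] = True
water = np.zeros((4, 4), dtype=bool); water[1, 3] = True
print(wetted_bend_area_ratio(water, bend))   # 0.125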
535

A study on the validity of the Lattice Boltzmann Method as a way to simulate high Reynolds number flows past porous obstacles

Sangtani, Navin January 2018 (has links)
With ever-growing levels of urbanisation across the globe, a good understanding of canopy flows is paramount to reducing pollution in major cities and preventing unwanted aerodynamic loading on structures. The multi-scale nature not only of urban construction but also of natural environments requires a more complex modelling approach. Fractal geometries have only recently been investigated in turbulent flows; their multi-scale properties make them the logical choice for modelling and simulating flows involving such complex geometries. Additionally, in recent years the usage of Lattice Boltzmann Methods (LBM) relative to conventional Computational Fluid Dynamics (CFD) has increased, since LBM offers better computational efficiency and speed. However, the shortcomings of LBM still need to be benchmarked, since macroscopic quantities of the flow are extracted from a probabilistic model of the flow at microscopic scales. A plan to investigate turbulent flows over fractal and non-fractal obstacles is presented, implementing an LBM numerical analysis over a range of Reynolds numbers (100-49410). The suitability of LBM's collision models, including Bhatnagar-Gross-Krook (BGK), Multiple Relaxation Time (MRT) and Regularised Lattice Boltzmann (RLB), has been studied for the high Reynolds number cases. Results from the LBM cases were compared to available experimental data and published literature; although the results of the fractal cases were not mesh independent, the compelling agreement between all three tested obstacles constitutes a significant validation of LBM as a tool to investigate high Reynolds number flows.
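For reference, the single-relaxation-time BGK scheme mentioned above updates the particle distribution functions f_i with the standard stream-and-collide rule (notation as commonly used in the LBM literature, not taken from the thesis):

```latex
f_i(\mathbf{x} + \mathbf{c}_i \Delta t,\, t + \Delta t)
  = f_i(\mathbf{x}, t)
  - \frac{\Delta t}{\tau}\left[ f_i(\mathbf{x}, t) - f_i^{\mathrm{eq}}(\mathbf{x}, t) \right]
```

Here tau is the relaxation time and f_i^eq is the local equilibrium distribution; MRT and RLB replace the single relaxation by, respectively, multiple relaxation rates applied in moment space and a regularisation of the non-equilibrium part of the distributions.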
536

Measuring friction at an interface using nonlinear ultrasonic response

Li, Xiangwei January 2018 (has links)
Contacts of rough surfaces are present in almost all machines and mechanical components. Friction at the rough interface causes energy dissipation, wear and damage of surfaces, so engineers are interested in knowing the frictional conditions at contact interfaces. Despite friction being such a fundamental phenomenon, it is surprisingly difficult to measure reliably, as results depend on the test method and the measurement environment. Methods have been developed to measure friction, and sliding contact tribometers have been devised, mostly for laboratory environments; applying them to measure friction in-situ in a real contact remains a challenge. Therefore, the aim of this research is to develop an ultrasonic method to measure friction and the friction coefficient in-situ at a contact interface. Ultrasonic methods developed for non-destructive testing have been used to measure tribological parameters, such as oil film thickness, viscosity and pressure, in-situ in bearings and machines. In conventional ultrasonic techniques, pulses are low power and, when they strike an interface, they do not change the contact state; the process is linear and elastic. However, high power sound waves can cause opening or closing of an interface, or interfacial slip; this is non-linear. Recently, Contact Acoustic Nonlinearity (CAN) has drawn interest due to its potential in non-destructive evaluation. When high power bulk shear ultrasound propagates through a compressed rough contact interface, higher order frequency components, namely higher odd order harmonics (3f, 5f, etc.), are generated in both the transmitted and reflected waves. The nonlinear nature of the stick-slip phenomenon in friction may be the source of this nonlinearity. In this study, the nonlinearity due to the interaction of a shear ultrasonic wave with a frictional interface was first investigated numerically. A one-dimensional numerical model was employed to understand contact nonlinearity generation and its dependence on the incident ultrasonic amplitude, contact pressure and friction coefficient. The third harmonic increases and then decreases as the contact stress rises, which suggests that nonlinearity generation due to the 'stick-slip' motion occurs at low contact stress and is restricted at high contact pressure. Harmonic generation at the contact was then investigated experimentally using a high frequency nonlinear ultrasonic technique. Methods were developed to separate the contact nonlinearity from the measured ultrasonic nonlinearity, and the contact nonlinearity originating from a rough interface was assessed under various test conditions. The experimental measurements show good agreement with the numerically computed nonlinearity. Two strategies were developed to estimate the friction coefficient using the experimentally measured contact nonlinearity in conjunction with the numerical computation. The ultrasonically measured friction coefficient agrees reasonably well with the sliding test results and published data. Using the contact nonlinearity, the ultrasonic method thus shows its usefulness for measuring the friction coefficient in-situ at a contact interface.
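As a generic illustration of the kind of harmonic analysis involved (not the processing chain used in the thesis; the frequencies, windowing and synthetic signal are assumptions), the sketch below estimates a third-harmonic-to-fundamental ratio from a received signal via the FFT.

```python
# Generic sketch: estimate the third-harmonic amplitude of a received ultrasonic
# tone burst via the FFT, as a simple nonlinearity indicator.
import numpy as np

fs = 100e6                      # sampling rate, Hz (assumed)
f0 = 2.5e6                      # fundamental shear-wave frequency, Hz (assumed)
t = np.arange(0, 40e-6, 1 / fs)

# Synthetic received signal: fundamental plus a weak third harmonic and noise.
x = np.sin(2 * np.pi * f0 * t) + 0.02 * np.sin(2 * np.pi * 3 * f0 * t)
x += 0.005 * np.random.default_rng(1).standard_normal(t.size)

spectrum = np.abs(np.fft.rfft(x * np.hanning(t.size)))
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def amp_at(f):
    """Spectral amplitude at the bin nearest to frequency f."""
    return spectrum[np.argmin(np.abs(freqs - f))]

A1, A3 = amp_at(f0), amp_at(3 * f0)
print(f"third-harmonic ratio A3/A1 = {A3 / A1:.4f}")
```

Tracking such a ratio as contact pressure varies is, in outline, how a nonlinearity indicator of this kind would be related to the contact state.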
537

On nonstationarity from operational and environmental effects in structural health monitoring bridge data

Iakovidis, Iason January 2018 (has links)
Structural Health Monitoring (SHM) describes a set of activities that can be followed in order to collect data from an existing structure, generate data-based information about its current condition, identify the presence of any signs of abnormality and forecast its future response. These activities include, among others, instrumentation, data acquisition, processing, the generation of diagnostic tools, and the transmission of information to engineers, owners and authorities. SHM and, more specifically, continuous monitoring can provide numerous measures, which can generally be classified into three categories: vibration-based measures, which include natural frequencies, mode shapes and damping ratios; component-based measures, such as strains, tensions and deflections; and environmental and operational variations (EOVs), associated with temperature, wind, traffic, humidity and others. One of the main technical problems that SHM has to tackle is that of data normalisation. In abstract terms, this describes the impact that EOVs can have on SHM measures. In many cases, with interest placed on bridges, it has been observed that EOVs introduce nonstationarity into SHM signals that can mask the variability associated with the presence of damage, making damage detection attempts difficult. Hence, it is desirable to quantify the impact of EOVs on damage-sensitive features and project it out, using methods such as cointegration, Principal Component Analysis (PCA) or others, in order to achieve a stationary signal. This type of signal can then be assessed over time using tools such as statistical process control (SPC) charts to identify the existence of novelty, which can be linked with damage. It is therefore important to detect the presence of nonstationarity in SHM signals and identify its sources. However, this is not a straightforward procedure, and one important question that needs to be answered is how one can judge whether a signal is stationary or not. In this work, this question is discussed, focusing on the definition of weak stationarity and the assumptions under which this judgement holds. In particular, the data coming from SHM are finite samples. The mean and variance of a signal can therefore be tracked using a sequence of moving windows, something that requires a prior determination of the window width. The major concern here, however, is that SHM signals can be characterised as periodically-correlated or cyclostationary. In such cases, it is better to use more advanced statistical tools to assess a signal's nonstationarity. More specifically, nonstationarity tests from the fields of econometrics and time-series analysis can be employed. In order to use such proxies more extensively, one should build trust in their indications by understanding the mechanism under which they perform. This work concentrates on the Augmented Dickey-Fuller (ADF) nonstationarity test, and emphasis is placed on the (unit root) hypothesis under which it performs its assessment. In brief, a series of simulations is generated and, based on dimensional analysis, it is shown that the ADF test essentially counts the number of cycles/periods of the dominant periodic component. Its indications depend on the number of observations/cycles, the normalised frequency of the signal, the sampling rate and the signal-to-noise ratio (SNR).

The most important conclusion is that, knowing the sampling frequency of any given signal, a critical frequency in Hz can be found, derived from the critical normalised frequency as a function of the number of cycles, which can be used directly to judge whether the signal is stationary or not. In other words, this investigation provides an answer to the question: after how many cycles of continuous monitoring (i.e. days) can an SHM signal be judged as stationary? As well as considering nonstationarity in a general way, this thesis returns to the main issue of data normalisation. To begin with, a laboratory test was performed, in the Jonas lab at Sheffield University, on an aluminium truss bridge model manufactured there. This involved vibration analysis of the truss bridge inside an environmental chamber, which simulated varying temperature conditions from -10 to 20 degrees Celsius, while damage was introduced to the structure by the removal of bolts and connecting brackets at two locations on the model. This experiment provided interesting results with which to discuss further the impact of EOVs on data coming from the monitoring of a small-scale structure. After that, the thesis discusses the use of Johansen's approach to cointegration in the context of SHM, demonstrates its use on the laboratory truss bridge data and provides a review of the available methods that can be used to monitor the cointegration residual. The latter is the stationary signal provided by cointegration, which is free from EOVs and suitable for novelty detection. The methodologies reviewed include various SPC charts, while the use of the ADF test is also explored and discussed extensively. Furthermore, an important conclusion from the SHM literature is that the impact of EOVs on SHM signals can occur on widely disparate time scales. Therefore, the quantification and elimination of these impacts from signals is not an easy procedure and prior knowledge is needed. For such purposes, refined tools originating from the field of signal processing can be used within SHM. Of particular interest here is the concept of multiresolution analysis (MRA), which has been used in SHM to decompose a given signal into its frequency components (different time scales) and evaluate the damage sensitivity of each one, employing Johansen's approach to cointegration, which is able to project out the impact of EOVs from multiple SHM series. A more principled way to perform MRA is proposed here, in order to decompose SHM signals, by introducing two additional steps. The first step is the ADF test, which can be used to assess each of the MRA levels in terms of nonstationarity. In this way, a critical decomposition level (L*) can be found and used to decompose the original SHM signal into a non-stationary and a stationary part. The second step introduced is the use of autocorrelation functions (ACFs) to test the stationary MRA levels and identify those that can be considered delta-correlated. These levels can be used to form a noisy component within the stationary one. Assuming that all the aforementioned steps are confirmed, the original signal can now be decomposed into a stationary, a mean, a non-stationary and a noisy component. The proposed decomposition can be of great interest not only for SHM purposes, but also in the general context of time-series analysis, as it provides a principled way to perform MRA.

The proposed analysis is demonstrated on natural frequency and temperature data from the Z24 Bridge. All in all, the thesis tries to answer the following questions: 1) How can an SHM signal be judged as stationary or non-stationary, and under which assumptions? 2) After how many cycles of continuous monitoring does an SHM signal that is initially non-stationary become stationary? 3) What are the main drivers of this nonstationarity (i.e. EOVs, abnormality/damage or others)? 4) How can one distinguish the effect of EOVs from that of abnormality/damage? 5) How can one project out the confounding influence of EOVs from an SHM signal and provide a signal that is suitable for novelty detection? 6) Is it possible to decompose an SHM signal and study each of its components separately? 7) Which of these components are mostly affected by EOVs, which by damage, and which do not include important information in terms of novelty detection? Understanding and answering all the aforementioned questions can help in identifying signals that can be monitored over time or in data windows, ensuring that stationarity is achieved, employing methodologies such as statistical process control (SPC) for novelty detection.
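A minimal, hedged illustration of the ADF test's dependence on the number of observed cycles (synthetic daily data and window lengths are assumptions, not values from the thesis) can be written with statsmodels as follows.

```python
# Illustrative use of the Augmented Dickey-Fuller (ADF) test on a daily SHM-like
# series. The null hypothesis is a unit root (non-stationarity).
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(2)
days = np.arange(3 * 365)

# Natural-frequency-like signal: a yearly temperature cycle plus measurement noise.
signal = 4.2 + 0.05 * np.sin(2 * np.pi * days / 365) + 0.01 * rng.standard_normal(days.size)

for n_days in (60, 365, 3 * 365):           # short window vs. one or several full cycles
    stat, pvalue, *_ = adfuller(signal[:n_days])
    verdict = "stationary" if pvalue < 0.05 else "non-stationary"
    print(f"{n_days:5d} days: ADF statistic = {stat:6.2f}, p = {pvalue:.3f} -> {verdict}")
```

Typically, on short windows the slow yearly cycle looks like a trend and the unit-root null is not rejected, while once several full cycles have been observed the test tends to flag the series as stationary, which is the behaviour discussed above.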
538

An experimental investigation of the fracture behaviour of particulate toughened epoxies

Jones, Stephen January 2013 (has links)
The addition of thermoplastic particles in the interlaminar region of a carbon-epoxy composite is known generally to improve mode I and mode II fracture toughnesses and also to improve damage tolerance. However, the mechanisms of toughening are poorly understood. Most studies so far have selected one interlaminar toughening particle (ILTP) and studied the effect of particle size and/or spatial distribution. A missing link in the continued development of interlaminar toughened systems is a study of the effect of the particle material and interface. Whilst matrix mode I toughness is a good indicator of composite mode I toughness, no such relationship has previously been established or investigated for mode II. This work focuses on measuring fracture parameters in bulk, particulate toughened epoxy resins using an experimental approach. Digital image correlation tools were used to determine displacement fields around the crack tip at a small scale, in both standard pure mode I specimens and mixed-mode I/II specimens, for five resin formulations, four with ILTP and one without. Mixed-mode stress intensity factors and the non-singular T-stress were extracted from the displacement fields using the Williams crack tip stress solutions. The T-stress term governs crack path stability, and it was found that this term can be used successfully to differentiate between the crack path behaviour at fracture of the different materials studied. A new methodology was developed to determine an apparent mode II toughness for resins, and this parameter was found to be directly proportional to the composite mode II toughness. This is believed to be the first time a relationship has been established between the mode II performance of particulate toughened resins and their composites. The novel parameters developed here allow inference of mode II composite behaviour from resin tests. This work is therefore a significant boost to the continued development of interlaminar toughened composites.
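For reference, the Williams expansion referred to above writes the near-tip stress field in terms of the mixed-mode stress intensity factors and the non-singular T-stress (standard form from the fracture mechanics literature, not a result of the thesis):

```latex
\sigma_{ij}(r,\theta) =
  \frac{K_{\mathrm{I}}}{\sqrt{2\pi r}}\, f_{ij}^{\mathrm{I}}(\theta)
+ \frac{K_{\mathrm{II}}}{\sqrt{2\pi r}}\, f_{ij}^{\mathrm{II}}(\theta)
+ T\,\delta_{1i}\delta_{1j}
+ O(\sqrt{r})
```

Here r and theta are polar coordinates centred at the crack tip, f_ij^I and f_ij^II are known angular functions, delta is the Kronecker delta, and T acts parallel to the crack plane; the corresponding displacement-field form of this expansion is what is fitted to the DIC data to extract K_I, K_II and T.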
539

The measurement and detection of residual stresses in cold-drawn tubes

Loxley, Eric Marshall January 1955 (has links)
No description available.
540

A probabilistic framework for statistical shape models and atlas construction : application to neuroimaging

Ravikumar, Nishant January 2017 (has links)
Accurate and reliable registration of shapes and multi-dimensional point sets describing the morphology/physiology of anatomical structures is a pre-requisite for constructing statistical shape models (SSMs) and atlases. Such statistical descriptions of variability across populations (regarding shape or other morphological/physiological quantities) are based on homologous correspondences across the multiple samples that comprise the training data. The notion of exact correspondence can be ambiguous when these data contain noise and outliers, missing data, or significant and abnormal variations due to pathology. However, these phenomena are common in medical image-derived data, due, for example, to inconsistencies in image quality and acquisition protocols, the presence of motion artefacts, differences in pre-processing steps, and inherent variability across patient populations and demographics. This thesis therefore focuses on formulating a unified probabilistic framework for the registration of shapes and so-called 'generalised point sets', which is robust to the anomalies and variations described. Statistical analysis of shapes across large cohorts demands automatic generation of training sets (image segmentations delineating the structure of interest), as manual and semi-supervised approaches can be prohibitively time consuming. However, automated segmentation and landmarking of images often result in shapes with high levels of outliers and missing data; consequently, a robust method for registration and correspondence estimation is required. A probabilistic group-wise registration framework for point-based representations of shapes, based on a Student's t-mixture model (TMM), and a multi-resolution extension of the same (mrTMM), are formulated to this end. The frameworks exploit the inherent robustness of Student's t-distributions to outliers, which is lacking in existing Gaussian mixture model (GMM)-based approaches. The registration accuracy of the proposed approaches was quantitatively evaluated and shown to outperform the state of the art using synthetic and clinical data. A corresponding improvement in the quality of the SSMs generated subsequently was also shown, particularly for data sets containing high levels of noise. In general, the proposed approach requires fewer user-specified parameters than existing methods, whilst affording much improved robustness to outliers. Registration of generalised point sets, which combine disparate features such as spatial positions, directional/axial data, and scalar-valued quantities, was studied next. A hybrid mixture model (HMM), combining different types of probability distributions, was formulated to facilitate the joint registration and clustering of multi-dimensional point sets of this nature. Two variants of the HMM were developed, for modelling: (1) axial data; and (2) directional data. The former, based on a combination of Student's t, Watson and Gaussian distributions, was used to register hybrid point sets comprising magnetic resonance diffusion tensor image (DTI)-derived quantities, such as voxel spatial positions (defining a region/structure of interest), associated fibre orientations, and scalar measures reflecting tissue anisotropy. The latter, formulated using a combination of Student's t and von Mises-Fisher distributions, was used for the registration of shapes represented as hybrid point sets comprising spatial positions and associated surface normal vectors.
The Watson variant of the HMM facilitates statistical analysis and group-wise comparisons of DTI data across patient populations, presented as an exemplar application of the proposed approach. The Fisher variant of the HMM, on the other hand, was used to register hybrid representations of shapes, providing substantial improvements over point-based registration approaches in terms of the anatomical validity of the estimated correspondences.
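As a toy illustration of why Student's t-distributions confer the robustness claimed above (this is not the thesis's TMM/mrTMM algorithm; the fixed unit scale, the degrees of freedom and the synthetic data are assumptions), the following sketch contrasts a plain mean with a t-weighted location estimate in the presence of outliers.

```python
# Toy illustration: Student's t EM-style weights down-weight distant outliers,
# so the estimated centre resists being dragged away, unlike a Gaussian mean.
import numpy as np

def t_weighted_mean(points, nu=3.0, n_iter=20):
    """Robust 2-D location estimate via iteratively reweighted means (fixed unit scale)."""
    d = points.shape[1]
    mu = points.mean(axis=0)
    for _ in range(n_iter):
        sq_dist = ((points - mu) ** 2).sum(axis=1)     # squared distance to current centre
        u = (nu + d) / (nu + sq_dist)                  # t-distribution EM weights
        mu = (u[:, None] * points).sum(axis=0) / u.sum()
    return mu

rng = np.random.default_rng(3)
inliers = rng.normal(0.0, 0.1, size=(100, 2))
outliers = rng.normal(5.0, 0.1, size=(10, 2))
pts = np.vstack([inliers, outliers])

print("plain mean:      ", pts.mean(axis=0))       # dragged toward the outliers
print("t-weighted mean: ", t_weighted_mean(pts))   # stays near the inlier cluster
```

The down-weighting of distant points by the factor (nu + d)/(nu + squared distance) is, in spirit, the mechanism that keeps t-mixture component centres from being pulled towards outliers during registration.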
