211

Benchmarking tool development for commercial buildings' energy consumption using machine learning

Paniz Hosseini (18087004) 03 June 2024 (has links)
<p dir="ltr">This thesis investigates approaches to classify and anticipate the energy consumption of commercial office buildings using external and performance benchmarking to reduce the energy consumption. External benchmarking in the context of building energy consumption considers the influence of climate zones that significantly impact a building's energy needs. Performance benchmarking recognizes that different types of commercial buildings have distinct energy consumption patterns. Benchmarks are established separately for each building type to provide relevant comparisons.</p><p dir="ltr">The first part of this thesis is about providing a benchmarking baseline for buildings to show their consumption levels. This involves simulating the buildings based on standards and developing a model based on real-time results. Software tools like Open Studio and Energy Plus were utilized to simulate buildings representative of different-sized structures to organize the benchmark energy consumption baseline. These simulations accounted for two opposing climate zones—one cool and humid and one hot and dry. To ensure the authenticity of the simulation, details, which are the building envelope, operational hours, and HVAC systems, were matched with ASHRAE standards.</p><p dir="ltr">Secondly, the neural network machine learning model is needed to predict the consumption of the buildings based on the trend data came out of simulation part, by training a comprehensive set of environmental characteristics, including ambient temperature, relative humidity, solar radiation, wind speed, and the specific HVAC (Heating, Ventilation, and Air Conditioning) load data for both heating and cooling of the building. The model's exceptional accuracy rating of 99.54% attained across all, which comes from the accuracy of training, validation, and test about 99.6%, 99.12%, and 99.42%, respectively, and shows the accuracy of the predicted energy consumption of the building. The validation check test confirms that the achieved accuracy represents the optimal performance of the model. A parametric study is done to show the dependency of energy consumption on the input, including the weather data and size of the building, which comes from the output data of machine learning, revealing the reliability of the trained model. Establishing a Graphic User Interface (GUI) enhances accessibility and interaction for users. In this thesis, we have successfully developed a tool that predicts the energy consumption of office buildings with an impressive accuracy of 99.54%. Our investigation shows that temperature, humidity, solar radiation, wind speed, and the building's size have varying impacts on energy use. Wind speed is the least influential component for low-rise buildings but can have a more substantial effect on high-rise structures.</p>
212

Understanding Human Imagination Through Diffusion Model

Pham, Minh Nguyen 22 December 2023 (has links)
This paper develops a possible explanation for a facet of visual processing inspired by the biological brain's mechanisms for information gathering. The primary focus is on how humans observe elements in their environment and reconstruct visual information within the brain. Drawing on insights from diverse studies, personal research, and biological evidence, the study posits that the human brain captures high-level feature information from objects rather than replicating exact visual details, as is the case in digital systems. Subsequently, the brain can either reconstruct the original object using its specific features or generate an entirely new object by combining features from different objects, a process referred to as "Imagination." Central to this process is the "Imagination Core," a dedicated unit housing a modified diffusion model. This model allows high-level features of an object to be employed for tasks like recreating the original object or forming entirely new objects from existing features. The experimental simulation, conducted with an Artificial Neural Network (ANN) incorporating a Convolutional Neural Network (CNN) for high-level feature extraction within the Information Processing Network and a Diffusion Network for generating new information in the Imagination Core, demonstrated the ability to create novel images based solely on high-level features extracted from previously learned images. This experimental outcome substantiates the theory that human learning and storage of visual information occur through high-level features, enabling us to recall events accurately, and these details are instrumental in our imaginative processes. / Master of Science / This study takes inspiration from how our brains process visual information to explore how we see and imagine things. Think of it like a digital camera, but instead of saving every tiny detail, our brains capture the main features of what we see. These features are then used to recreate images or even form entirely new ones through a process called "Imagination." It is like when you remember something from the past – your brain does not store every little detail but retains enough to help you recall events and create new ideas. In our study, we created a special unit called the "Imagination Core," using a modified diffusion model, to simulate how this process works. We trained an Artificial Neural Network (ANN) with a Convolutional Neural Network (CNN) to extract the main features of objects and a Diffusion Network to generate new information in the Imagination Core. The exciting part? We were able to make the computer generate new images it had never seen before, only using details it learned from previous images. This supports the idea that, like our brains, focusing on important details helps us remember things and fuels our ability to imagine new things.
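As a hedged, minimal sketch of what feature-conditioned diffusion training can look like in the spirit of the "Imagination Core" described above (the actual architecture, dataset, and hyperparameters are not given here; the MLP denoiser, linear noise schedule, image size, and feature dimension below are all illustrative assumptions):

```python
# Minimal feature-conditioned diffusion (DDPM-style) training sketch in PyTorch.
# The denoiser predicts the added noise given a noisy image, a timestep, and a
# conditioning feature vector (standing in for CNN-extracted high-level features).
import torch
import torch.nn as nn

T = 200                                   # diffusion steps (assumed)
betas = torch.linspace(1e-4, 0.02, T)     # linear noise schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

img_dim, feat_dim = 28 * 28, 64           # assumed sizes (e.g. MNIST-like images)

class Denoiser(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(img_dim + feat_dim + 1, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, img_dim),
        )
    def forward(self, x_noisy, t, features):
        # Concatenate noisy image, normalized timestep, and conditioning features.
        t_embed = (t.float() / T).unsqueeze(1)
        return self.net(torch.cat([x_noisy, t_embed, features], dim=1))

model = Denoiser()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(x0, features):
    """One DDPM training step: add noise at a random timestep, predict that noise."""
    b = x0.shape[0]
    t = torch.randint(0, T, (b,))
    noise = torch.randn_like(x0)
    a_bar = alphas_bar[t].unsqueeze(1)
    x_noisy = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * noise
    loss = nn.functional.mse_loss(model(x_noisy, t, features), noise)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Example with random tensors standing in for real images and CNN features.
x0 = torch.rand(16, img_dim) * 2 - 1
features = torch.randn(16, feat_dim)
print(train_step(x0, features))
```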
213

Accuracy Enhancement of Robots using Artificial Intelligence

Johannes Evers, Till January 2024 (has links)
Robots rely on an underlying model to control their joints with the aim of reaching a specific pose. The accuracy of a robot depends on this underlying model and its parameters. The parameters are based on the ideal geometry and are set by the manufacturer; if they do not represent the physical robot accurately, its movements become inaccurate. Methods exist to optimize the parameters so that they represent the physical robot more accurately, which enhances accuracy. Nevertheless, the manufacturer's model remains analytical in form and therefore has limited complexity, which prevents it from representing arbitrary non-linearities and higher-degree relations. To further enhance the accuracy of a robot through a model of higher complexity, this work investigates an inverse kinematics model based on Artificial Intelligence (AI). The accuracy is investigated for a robot with an attached tool. In the first step, a suitable AI model is developed and initially evaluated in a simulated environment. Afterwards, the uncompensated accuracy of the robot with the tool is assessed and measurements are recorded. Using these measurements, an AI model of the physical robot is trained. The model is then evaluated on the physical robot with the tool to quantify the achieved accuracy enhancement.
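As a hedged illustration of learning inverse kinematics from pose and joint data (the thesis does not detail its AI model here; the planar two-link arm, its link lengths, and the data generation below are assumptions made purely for illustration):

```python
# Hedged sketch: learn the inverse kinematics of a planar 2-link arm from data.
# A real study would train on measured poses of the physical robot instead.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

L1, L2 = 0.4, 0.3                          # assumed link lengths (m)

def forward_kinematics(q):
    """End-effector (x, y) for joint angles q = (q1, q2)."""
    x = L1 * np.cos(q[:, 0]) + L2 * np.cos(q[:, 0] + q[:, 1])
    y = L1 * np.sin(q[:, 0]) + L2 * np.sin(q[:, 0] + q[:, 1])
    return np.column_stack([x, y])

rng = np.random.default_rng(1)
# Restrict q2 to (0, pi) so each pose has a single elbow-up solution.
q = np.column_stack([rng.uniform(-np.pi, np.pi, 20000),
                     rng.uniform(0.1, np.pi - 0.1, 20000)])
poses = forward_kinematics(q)

X_train, X_test, y_train, y_test = train_test_split(poses, q, random_state=0)
ik_model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=1000,
                        random_state=0).fit(X_train, y_train)

# Evaluate by pushing predicted joints back through the forward kinematics.
pose_error = np.linalg.norm(forward_kinematics(ik_model.predict(X_test)) - X_test, axis=1)
print("mean position error (m):", pose_error.mean())
```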
214

Precision Medicine Approach to Improving Reconstructive Surgery Outcomes for Breast Cancer Survivors

Degen, Katherine Emily 25 July 2018 (has links)
As survival rates increase, quality of life after cancer is becoming more important. This, in conjunction with genetic screening, has increased the number of breast reconstructions by 36%. The most common complication requiring revision of reconstructive surgery is the formation of a dense scar capsule around the silicone implant, called capsular contracture. Nearly all patients experience this complication to some degree, ranging from moderate scarring to major disfigurement and pain at the implant site. Presently, there is no way to prospectively predict the degree of capsule contraction an individual patient will suffer, nor is there a clinical approach to preventing the complication. Patient information and tissue were collected in a uniform manner to address these lingering problems. Clinical data were used to construct a predictive model that accurately predicts capsular contracture severity in breast reconstruction patients. Histological analysis demonstrated differences in structure and cell composition between capsules of different severities. Of particular note, a new region was described that could serve as the communication interface between innate immune cells and fibroblasts. RNA-seq analysis identified 1029 significantly dysregulated genes in severe capsules. Pathway enrichment highlighted IL4/13 signaling, extracellular matrix organization, antigen presentation, and interferon signaling as importantly dysregulated pathways. These RNA results were also compared with clinical and histological measurements to evaluate novel correlations. PVT-1, a long non-coding RNA associated with cancer, was strongly correlated with capsules formed after cancer removal, suggesting cancerous transformation of cell types that remain after the tumor is removed. Furthermore, transgelin and caspase 7 correlated with myofibroblast density, suggesting abnormal fibroblasts that are resistant to cell death and may have enhanced contractile ability. Capsule formation is a complex process; however, with well-controlled clinical models, quantitative differences can be found. These results serve as a stepping stone for the field to move beyond retrospective clinical trials and pursue treatments and preventative measures. / Ph. D.
215

Short term load forecasting using neural networks

Nigrini, L.B., Jordaan, G.D. January 2013 (has links)
Published Article / Several forecasting models are available for research in predicting the shape of electric load curves. Developments in Artificial Intelligence (AI), especially Artificial Neural Networks (ANNs), can be applied to short term load forecasting. Because of their input-output mapping ability, ANNs are well suited to load forecasting applications. ANNs have been used extensively as time series predictors; these can include feed-forward networks that make use of a sliding window over the input data sequence. Using a combination of a time series and a neural network prediction method, past load data can be explored and used to train a neural network to predict the next load point. In this study, the use of ANNs for short term load forecasting for Bloemfontein, Free State was investigated with the MATLAB Neural Network Toolbox, demonstrating ANN capabilities in load forecasting using only load history as input.
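The abstract refers to feed-forward networks fed by a sliding window over the load history. A minimal, hedged sketch of that windowing idea follows; the study used the MATLAB Neural Network Toolbox, so this Python version with a synthetic hourly load series is only an illustration of the approach, not the published model.

```python
# Hedged sketch: sliding-window short term load forecasting with a feed-forward net.
# The synthetic daily/weekly load pattern below stands in for real utility data.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
hours = np.arange(24 * 365)
load = (100 + 30 * np.sin(2 * np.pi * hours / 24)          # daily cycle
        + 10 * np.sin(2 * np.pi * hours / (24 * 7))        # weekly cycle
        + rng.normal(0, 3, hours.size))                    # noise

window = 24  # use the previous 24 hourly loads to predict the next one
X = np.array([load[i:i + window] for i in range(load.size - window)])
y = load[window:]

split = int(0.8 * len(X))                                  # chronological split
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
model.fit(X[:split], y[:split])

pred = model.predict(X[split:])
mape = np.mean(np.abs((y[split:] - pred) / y[split:])) * 100
print(f"test MAPE: {mape:.2f}%")
```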
216

Comparison of porous media permeability : experimental, analytical and numerical methods

Mahdi, Faiz M. January 2014 (has links)
Permeability is an important property of a porous medium, controlling the flow of fluid through the medium. Particle characteristics are known to affect the value of the permeability. However, experimental investigation of the effects of these characteristics on permeability is time consuming, while analytical predictions have been reported to overestimate it, leading to inefficient design. To overcome these challenges, new models are needed that can predict permeability from input variables and process conditions. In this research, data from experiments, Computational Fluid Dynamics (CFD) and the literature were employed to develop new models using Multivariate Regression (MVR) and Artificial Neural Networks (ANNs). Experimental measurements of permeability were performed using high and low shear separation processes. Particles of talc, calcium carbonate and titanium dioxide (P25) were used in order to study porous media with different particle characteristics and feed concentrations. The effects of particle characteristics and of the initial stages of filtration, as well as the reliability of the filtration techniques (constant pressure filtration, CPF, and constant rate filtration, CRF), were investigated. CFD simulations of porous media with different particle characteristics were also performed to generate additional data. The regression and ANN models also included permeability data taken from reliable literature sources. Particle cluster formation was found only in P25, leading to an increase in permeability, especially in sedimentation. The constant rate filtration technique was found more suitable for permeability measurement than constant pressure. Analyses of the data from the experiments, CFD and correlation showed that the Sauter mean diameter (ranging from 0.2 to 168 μm), the fines ratio (x50/x10), particle shape (following Heywood's approach), and the voidage of the porous medium (ranging from 98.5 to 37.2%) were the significant parameters for permeability prediction. Using these four parameters as inputs, the performance of models based on linear and nonlinear MVR as well as ANNs was investigated together with the existing analytical models (Kozeny-Carman, K-C, and Happel-Brenner, H-B). The coefficient of correlation (R2), root mean square error (RMSE) and average absolute error (AAE) were used as performance criteria for the models. K-C and H-B are two-variable models (Sauter mean diameter and voidage), and already the two-variable ANN and MVR models showed better predictive performance; the four-variable (Sauter mean diameter, x50/x10, particle shape, and voidage) models developed from MVR and ANN exhibited excellent performance. The AAE with the K-C and H-B models was 35 and 40%, respectively, while using the ANN2 model reduced the AAE to 14%. The ANN4 model further decreased the AAE to approximately 9% compared with the measured results. The main reason for this reduced error was the addition of a shape coefficient and particle spread (fines ratio) in the ANN4 model; these two parameters are absent from analytical relations such as the K-C and H-B models. Furthermore, using the ANN4 (4-5-1) model increased the R2 value from 0.90 to 0.99 and significantly decreased the RMSE value from 0.121 to 0.054. Finally, the investigations and findings of this work demonstrate that the relationships between permeability and the particle characteristics of a porous medium are highly nonlinear and complex.
The new models possess the capability to predict the permeability of porous media more accurately owing to the incorporation of additional particle characteristics that are missing in the existing models.
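For reference, a commonly quoted form of the Kozeny-Carman relation mentioned above is k = d_sv^2 * e^3 / (180 * (1 - e)^2), with d_sv the Sauter mean diameter and e the voidage. The hedged sketch below contrasts a two-input ANN with a four-input ANN in that spirit; the synthetic data, the Kozeny constant of 180, and the toy permeability relationship are assumptions, not the thesis's measurements.

```python
# Hedged sketch: analytical Kozeny-Carman estimate vs. small ANNs with extra inputs.
# Synthetic data only; the thesis fits its models to experimental, CFD and literature data.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

def kozeny_carman(d_sv, voidage):
    """Kozeny-Carman permeability for a packed bed (Kozeny constant 180 assumed)."""
    return (d_sv ** 2) * voidage ** 3 / (180.0 * (1.0 - voidage) ** 2)

rng = np.random.default_rng(0)
n = 3000
d_sv = rng.uniform(0.2e-6, 168e-6, n)        # Sauter mean diameter (m)
voidage = rng.uniform(0.372, 0.90, n)        # voidage (fraction)
fines_ratio = rng.uniform(1.5, 10.0, n)      # x50/x10 spread
shape = rng.uniform(0.4, 1.0, n)             # Heywood-style shape factor

# Toy "true" permeability: K-C modified by shape and spread, plus noise.
k_true = kozeny_carman(d_sv, voidage) * shape * (1.0 / fines_ratio) ** 0.3
k_true *= rng.lognormal(0.0, 0.1, n)

X4 = np.column_stack([np.log(d_sv), voidage, fines_ratio, shape])
y = np.log10(k_true)                          # regress in log space

X_tr, X_te, y_tr, y_te = train_test_split(X4, y, random_state=0)
ann2 = make_pipeline(StandardScaler(), MLPRegressor(hidden_layer_sizes=(5,),
                                                    max_iter=3000, random_state=0))
ann4 = make_pipeline(StandardScaler(), MLPRegressor(hidden_layer_sizes=(5,),
                                                    max_iter=3000, random_state=0))
ann2.fit(X_tr[:, :2], y_tr)                   # two inputs: diameter and voidage
ann4.fit(X_tr, y_tr)                          # all four inputs

print("ANN2 R^2:", ann2.score(X_te[:, :2], y_te))
print("ANN4 R^2:", ann4.score(X_te, y_te))
```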
217

A case-based reasoning methodology to formulating polyurethanes

Segura-Velandia, Diana M. January 2006 (has links)
Formulation of polyurethanes is a complex problem that is poorly understood, as it has developed more as an art than a science. Only a few experts have mastered polyurethane (PU) formulation after years of experience, and the major raw material manufacturers largely hold such expertise. Understanding of PU formulation is at present insufficient for it to be developed from first principles. The first-principles approach requires time, a detailed understanding of the underlying principles that govern the formulation process (e.g. PU chemistry, kinetics) and a number of measurements of process conditions. Even in the simplest formulations there are more than 20 variables, often interacting with each other in very intricate ways. In this doctoral thesis, the use of the Case-Based Reasoning and Artificial Neural Network paradigms is proposed to support PU formulation tasks by providing a framework for the collection, structure, and representation of real formulating knowledge. The framework is also aimed at facilitating the sharing and deployment of solutions in a consistent and referable way, when appropriate, for future problem solving. Two basic problems in the development of a Case-Based Reasoning tool that uses past flexible PU foam formulation recipes, or cases, to solve new problems were studied. A PU case was divided into a problem description (i.e. the measured mechanical properties of the PU) and a solution description (i.e. the ingredients and their quantities to produce the PU). The problems investigated relate to the retrieval of former PU cases that are similar to a new problem description, and the adaptation of the retrieved case to meet the problem constraints. For retrieval, an alternative similarity measure based on the moments of a case when it is represented as a two-dimensional image was studied. Retrieval using geometric, central and Legendre moments was also studied and compared with a standard nearest neighbour algorithm using nine different distance functions (e.g. Euclidean, Canberra, City Block, among others). It was concluded that when cases are represented as 2D images and matching is performed using moment functions, in a similar fashion to approaches used in image analysis and pattern recognition, low-order geometric and Legendre moments and central moments of any order retrieve the same case as the Euclidean distance does when used in a nearest neighbour algorithm. This means that the Euclidean distance acts as a low-order moment function that represents gross-level case features. Higher-order (order > 3) geometric and Legendre moments, while enabling finer details of an image to be represented, had no standard distance function counterpart. For the adaptation of retrieved cases, a feed-forward back-propagation artificial neural network was proposed, to reduce the adaptation knowledge acquisition effort that has prevented building complete CBR systems and to generate a mapping between changes in mechanical properties and formulation ingredients. The proposed network was trained with the differences between problem descriptions (i.e. the mechanical properties of a pair of foams) as input patterns and the differences between solution descriptions (i.e. the formulation ingredients) as output patterns. A complete data set based on 34 initial formulations was used, and a network trained for 16950 epochs with 1102 training exemplars, produced from the case differences, gave only 4% error.
However, further work with a data set consisting of a training set and a small validation set failed to generalise, returning a high percentage of errors. Further tests on different training/test splits of the data also failed to generalise. The conclusion reached is that the data as such has insufficient common structure to support any general conclusions. Other evidence that the data does not contain generalisable structure includes the large number of hidden nodes necessary to achieve convergence on the complete data set.
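As a hedged illustration of the retrieval step described above (nearest neighbour search over problem descriptions under several distance functions), the sketch below uses an invented case base; the property names, values, and recipes are placeholders, not data from the thesis.

```python
# Hedged sketch: CBR retrieval by nearest neighbour under several distance functions.
# Each case pairs a problem description (mechanical properties) with a solution (recipe).
import numpy as np
from scipy.spatial.distance import euclidean, cityblock, canberra

# Invented case base: each problem description is [density, hardness, tensile, elongation].
case_problems = np.array([
    [28.0, 130.0, 95.0, 210.0],
    [32.0, 160.0, 110.0, 180.0],
    [25.0, 115.0, 80.0, 240.0],
    [40.0, 200.0, 140.0, 150.0],
])
case_solutions = ["recipe A", "recipe B", "recipe C", "recipe D"]  # placeholders

def retrieve(query, metric):
    """Return the stored solution whose problem description is closest to the query."""
    distances = [metric(query, p) for p in case_problems]
    return case_solutions[int(np.argmin(distances))], min(distances)

query = np.array([30.0, 150.0, 100.0, 190.0])   # new problem description
for name, metric in [("euclidean", euclidean),
                     ("city block", cityblock),
                     ("canberra", canberra)]:
    solution, d = retrieve(query, metric)
    print(f"{name:10s} -> {solution} (distance {d:.2f})")
```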
218

PATTERN RECOGNITION IN CLASS IMBALANCED DATASETS

Siddique, Nahian A 01 January 2016 (has links)
Class imbalanced datasets constitute a significant portion of the machine learning problems of interest, where recognizing the 'rare class' is the primary objective for most applications. Traditional linear machine learning algorithms are often not effective in recognizing the rare class. In this research work, a specifically optimized feed-forward artificial neural network (ANN) is proposed and developed to train on moderately to highly imbalanced datasets. The proposed methodology deals with the difficulty of the classification task in multiple stages: by optimizing the training dataset, modifying the kernel function used to generate the Gram matrix, and optimizing the NN structure. First, the training dataset is extracted from the available sample set through an iterative process of selective under-sampling. Then, the proposed artificial NN includes a kernel function optimizer that enhances class boundaries for imbalanced datasets by conformally transforming the kernel functions. Finally, a single hidden layer weighted neural network structure is proposed to train models from the imbalanced dataset. The proposed NN architecture can effectively classify any binary dataset, even with a very high imbalance ratio, given appropriate parameter tuning and a sufficient number of processing elements. The effectiveness of the proposed method is tested on accuracy-based performance metrics, achieving close to and above 90% on several imbalanced datasets of a generic nature, and compared with state-of-the-art methods. The proposed model is also used to classify a 25 GB computed tomographic colonography database to test its applicability to big data. The effectiveness of under-sampling and of kernel optimization for training the NN model from the modified kernel Gram matrix representing the imbalanced data distribution is also analyzed experimentally. Computation time analysis shows the feasibility of the system for practical purposes. The report concludes with a discussion of the prospects of the developed model and suggestions for further development in this direction.
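A hedged sketch of the two simplest ingredients named above, under-sampling of the majority class and a single-hidden-layer network, is given below; the conformal kernel transformation is not reproduced, and the synthetic data and the 3:1 sampling ratio are assumptions.

```python
# Hedged sketch: majority-class under-sampling + a single-hidden-layer network
# for an imbalanced binary problem, evaluated with recall on the rare class.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import recall_score, balanced_accuracy_score

X, y = make_classification(n_samples=20000, n_features=20, weights=[0.98, 0.02],
                           random_state=0)          # about 2% rare class
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Simple random under-sampling of the majority class (the thesis uses an
# iterative, selective scheme instead).
rng = np.random.default_rng(0)
maj = np.flatnonzero(y_tr == 0)
minr = np.flatnonzero(y_tr == 1)
keep = rng.choice(maj, size=3 * len(minr), replace=False)   # assumed 3:1 ratio
idx = np.concatenate([keep, minr])

clf = MLPClassifier(hidden_layer_sizes=(25,), max_iter=500, random_state=0)
clf.fit(X_tr[idx], y_tr[idx])

pred = clf.predict(X_te)
print("rare-class recall:", recall_score(y_te, pred))
print("balanced accuracy:", balanced_accuracy_score(y_te, pred))
```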
219

Numerical and artificial neural network modelling of friction stir welding

Wang, Hua January 2011 (has links)
This thesis is based on PhD work investigating the Friction Stir Welding (FSW) process with numerical and Artificial Neural Network (ANN) modelling methods. FSW was developed at TWI in 1991. As a relatively new technology it has great advantages in welding aluminium alloys that are difficult to weld with traditional welding processes. The aim of this thesis was the development of new modelling techniques to predict the thermal and deformation behaviour. To achieve this aim, a group of Gleeble experiments was conducted on 6082 and 7449 aluminium alloys to investigate the material constitutive behaviour under high strain-rate, near-solidus conditions, similar to what the material experiences during the FSW process. By numerically processing the experimental data, new material constitutive constants were found for both alloys and used for the subsequent FSW modelling work. Importantly, no significant softening was observed prior to the solidus temperature. One of the main problems with numerical modelling is determining the values of adjustable parameters in the model. Two common adjustable parameters are the heat input and the coefficients that describe the heat loss to the backing bar. To predict these coefficients more efficiently, a hybrid model was created which linked a conventional numerical model to an ANN model. The ANN was trained using data from the numerical model: thermal profiles were abstracted (summarised) and used as inputs, and the adjustable parameters were used as outputs. The trained ANN could then use abstracted thermal profiles from welding experiments to predict the adjustable parameters in the model. The first stage involved developing a simplified FE thermal model representing a typical welding process. It was used to find the coefficients that describe the heat loss to the backing bar and the amount of power applied in the model. Five different thermal boundary conditions were studied, including both convective conditions and ones that included the backing bar with a contact gap conductance. Three approaches for abstracting the thermal curves and using them as ANN inputs were compared. In the study, the characteristics of the ANN model, such as its topology and the gradient descent method, were evaluated for each boundary condition to understand their influence on the prediction. The outcomes of the study showed that the hybrid model technique was able to determine the adjustable parameters in the model effectively, although the accuracy depended on several factors. One of the most significant was the complexity of the boundary condition: while a single-factor boundary condition (e.g. constant convective heat loss) could be predicted easily, boundary conditions with two factors proved more difficult. The method for inputting the data into the ANN had a significant effect on the hybrid model performance: a small number of inputs could be used for the single-factor boundary condition, while two-factor boundary conditions needed more inputs. The influence of the characteristics of the ANN model was smaller, but again a thermal model with a simpler boundary condition required a less complex ANN model to achieve an accurate prediction, while models with more complex boundary conditions needed a more sophisticated ANN model. Next, the hybrid method was applied to an FSW process model developed for the Flexi-stir FSW machine.
This machine has been used to analyse the complex phase changes that occur during FSW with synchrotron radiation. This unique machine had a complex backing bar system involving heat transfer from the aluminium alloy workpiece to the copper and steel backing bars. A temperature-dependent contact gap conductance, which also depends on the material interface type, was used. During the investigation, ANN model topologies (i.e. GFF and MFF) were studied to find the most effective one. Different abstracting methods for the thermal curves were also compared to explore which factors (e.g. the peak temperature of the curve, the cooling slope of the curve) were more important as inputs. Given the close match between the simulated and experimental thermal profiles, the hybrid model can predict both the power and the thermal boundary condition between the workpiece and backing bar. The hybrid model was applied to six different travel speeds, hence six sets of heat input and boundary condition factors were found. A universal set was calculated from the six outcomes, and a link was discovered between the accuracy of the temperature predictions and the plunge depth of the welds. Finally, a model with a slip contact condition between the tool and workpiece was used to investigate how the material flow behaviour was affected by the slip boundary condition. This work involved aluminium alloys 6082-T6 and 7449-T7, which have very different mechanical properties. The application of a slip boundary condition was found to significantly reduce the strain-rate compared with a stick condition. The slip condition was applied to the Flexi-stir FSW experiments, and the results indicated that a larger deformation region may form with the slip boundary condition. The thesis successfully demonstrates a new methodology for determining the adjustable parameters in a process model, improved understanding of the effect of slip boundary conditions on the flow behaviour during FSW, and insight into the behaviour of aluminium alloys at temperatures approaching the solidus and at high strain-rates.
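The hybrid idea, training an ANN on forward-model runs so that abstracted thermal curves can be inverted back to the unknown model parameters, can be sketched as follows; the exponential "forward model", the two parameters (power and a heat-loss coefficient), and the curve features are illustrative assumptions, not the thesis's FE model.

```python
# Hedged sketch of the hybrid FE/ANN calibration idea: run a cheap forward thermal
# model over many parameter sets, abstract each thermal curve into a few features,
# and train a network to map the features back to the parameters (power, heat loss).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

t = np.linspace(0.0, 60.0, 200)                                 # time (s)

def forward_model(power, h_loss):
    """Toy thermal history: rise driven by power, exponential loss to the backing bar."""
    return 25.0 + power * (1.0 - np.exp(-0.5 * t)) * np.exp(-h_loss * t)

def abstract_curve(temp):
    """Summarise a thermal curve: peak temperature, time of peak, final cooling slope."""
    peak = int(np.argmax(temp))
    slope = (temp[-1] - temp[peak]) / (t[-1] - t[peak] + 1e-9)
    return [temp[peak], t[peak], slope]

rng = np.random.default_rng(0)
params = np.column_stack([rng.uniform(200, 600, 4000),          # "power" (arbitrary units)
                          rng.uniform(0.005, 0.05, 4000)])      # heat-loss coefficient
features = np.array([abstract_curve(forward_model(p, h)) for p, h in params])

X_tr, X_te, y_tr, y_te = train_test_split(features, params, random_state=0)
x_scaler = StandardScaler().fit(X_tr)
y_mean, y_std = y_tr.mean(axis=0), y_tr.std(axis=0)             # scale both targets

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
net.fit(x_scaler.transform(X_tr), (y_tr - y_mean) / y_std)

pred = net.predict(x_scaler.transform(X_te)) * y_std + y_mean
print("R^2 per parameter (power, heat loss):",
      r2_score(y_te, pred, multioutput='raw_values'))
# An experimental curve would be abstracted the same way and passed through the net.
```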
220

Economic evaluation, strategy and prediction studies of results into beef cattle production using different scenarios /

Romanzini, Eliéder Prates. January 2019 (has links)
Advisor: Euclides Braga Malheiros / Abstract: Brazilian beef cattle production has been under pressure to obtain ever better results, which drives producers to adopt specific practices and management that allow them to remain in the activity. This study aimed to evaluate the use of artificial intelligence, specifically artificial neural networks (ANN), to predict future results for both forage and animal production; to determine, among several rearing and finishing scenarios for beef cattle on tropical pastures, which scenario gave the best economic results; and to evaluate which nitrogen fertilizer level returned the best economic indexes. The ANNs performed better than the regressions normally used to predict forage production (mean values obtained with the ANNs were 0.84, 0.78 and 0.75 for forage mass, leaf percentage and stem percentage, versus 0.74, 0.39 and 0.50 obtained with multiple linear regression) and animal production (0.72 for the ANN versus 0.67 for regression). In the scenario study, the best results were obtained when only a mineral salt supplement was used during the rearing phase (profitability of 26.3%, simple payback period of 11 cycles and internal rate of return of 9.30%). In the finishing phase, the best results occurred when the weather variables, through pasture management, allowed a higher stocking rate (3.18 AU ha-1) in the area. The evaluation of the economic results for the different nitrogen fertilizer levels showed a linear increase in both costs and gross revenue,... (Complete abstract: click electronic access below) / Doutor
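A hedged sketch of the kind of comparison reported above, an ANN against multiple linear regression judged by R^2, is shown below with synthetic weather-driven forage data; the input names and the toy response are assumptions for illustration only.

```python
# Hedged sketch: ANN vs. multiple linear regression for predicting forage mass
# from weather inputs, compared by R^2 as in the study (synthetic data only).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
rainfall = rng.uniform(0, 300, n)          # monthly rainfall (mm), assumed input
temperature = rng.uniform(10, 35, n)       # mean temperature (C)
radiation = rng.uniform(10, 30, n)         # solar radiation (MJ/m^2/day)
X = np.column_stack([rainfall, temperature, radiation])

# Toy nonlinear response standing in for forage mass (kg DM/ha).
forage_mass = (2000 + 8 * rainfall - 0.015 * rainfall ** 2
               + 120 * np.sin(temperature / 6) * radiation
               + rng.normal(0, 150, n))

X_tr, X_te, y_tr, y_te = train_test_split(X, forage_mass, random_state=0)

linear = LinearRegression().fit(X_tr, y_tr)
ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000,
                                 random_state=0)).fit(X_tr, y_tr)

print("multiple linear regression R^2:", linear.score(X_te, y_te))
print("ANN R^2:", ann.score(X_te, y_te))
```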
