31 |
Automated Tools for Accelerating Development of Combustion Modelling / Yalamanchi, Kiran K. 09 1900 (has links)
The ever-increasing focus of policy-makers on environmental issues is pushing the combustion community to make combustion cleaner by optimizing combustion equipment in order to reduce emissions, improve efficiency, and satisfy growing energy demand. A major part of this effort involves advancing the modelling capabilities for these complex combustion systems, which combine computational fluid dynamics with detailed chemical kinetic models. A chemical kinetic model comprises a series of elementary reactions with corresponding kinetic rate parameters and species thermodynamic and transport data. The predictive capability of these models depends on the accuracy to which the individual chemical reaction rates and thermodynamic and transport parameters are known. Only a minor fraction of the rate constants and thermodynamic properties in widely used kinetic mechanisms are experimentally derived or theoretically calculated; the remainder are approximated using rate rules for rate constants and group additivity methods for thermodynamic properties. Recent works have highlighted the need for error checking when preparing models from these approximations, but a useful community tool to perform such analysis has been missing.
In the initial part of this work, we developed a simple online tool to screen chemical kinetic mechanisms for bimolecular reactions exceeding collision limits. However, unphysically fast time scales can remain a problem even if all bimolecular reactions are within collision limits, so we also presented a procedure to screen for ultra-fast reaction time scales using computational singular perturbation (CSP). Screening kinetic models is a necessary condition but not a sufficient one; therefore, new approaches for the simulation of complex chemically reacting systems are needed. This work focuses on developing new methods for estimating thermodynamic data efficiently and accurately, thereby improving compliance with the aforementioned screening. Machine learning (ML) has increasingly become a tool of choice for regression, replacing traditional function fitting. Group additivity fits simple functions to existing data to derive constants and then uses those functions to estimate unknown values. ML algorithms do the same without fixing a specific functional form, letting the algorithm learn the non-linearity from the training data itself. As new data arrive over time, ML models can be retrained and improve, which is not necessarily the case with traditional methods.
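As a rough illustration of the collision-limit check described above, the sketch below compares a modified-Arrhenius rate constant against the hard-sphere collision rate constant from kinetic theory. The Arrhenius parameters and collision diameters are assumed for illustration; they are not values from the tool.

```python
import numpy as np

KB = 1.380649e-23     # Boltzmann constant, J/K
NA = 6.02214076e23    # Avogadro's number, 1/mol
R = 1.987204          # gas constant, cal/(mol K)

def collision_limit(T, sigma_A, sigma_B, mA, mB):
    """Hard-sphere collision rate constant in cm^3/(mol s).
    sigma_*: collision diameters in m; m*: molecular masses in kg."""
    d = 0.5 * (sigma_A + sigma_B)            # mean collision diameter
    mu = mA * mB / (mA + mB)                 # reduced mass
    k_si = NA * np.pi * d**2 * np.sqrt(8.0 * KB * T / (np.pi * mu))
    return k_si * 1e6                        # m^3/(mol s) -> cm^3/(mol s)

def arrhenius(T, A, n, Ea):
    """Modified Arrhenius rate; A in cm^3/(mol s), Ea in cal/mol."""
    return A * T**n * np.exp(-Ea / (R * T))

# Illustrative check for a hypothetical H + O2 style reaction
for T in (500.0, 1000.0, 2000.0):
    k = arrhenius(T, A=1.0e16, n=0.0, Ea=1000.0)          # assumed parameters
    k_max = collision_limit(T, 2.7e-10, 3.5e-10,          # assumed diameters
                            1.67e-27, 5.31e-26)           # H and O2 masses, kg
    if k > k_max:
        print(f"T={T:.0f} K: k={k:.2e} exceeds collision limit {k_max:.2e}")
```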
In the first part of the study, data for standard enthalpy are collected from literature sources and ML models are built on these databases. Two different models were built and studied, one for a straight-chain species dataset and one for a cyclic species dataset. Molecular descriptors are used as inputs because the datasets collected from the literature are too small for sparse representations. As expected, we observed a good improvement over the group additivity method for these ML models, and the improvement is more significant for cyclic species. Motivated by the benefit of ML models over the group additivity method, a step further was taken. A homogeneous and accurate dataset is necessary for building an ML model that can be used to generate thermodynamic data for kinetic models. With this in mind, an accurate thermodynamic database is built from ab initio calculations. The species in the dataset are taken from a detailed and well-established mechanism so as to cover all the species in a typical kinetic mechanism, and the calculations are performed at a high level of accuracy in comparison to other similar datasets in the literature. In the later part of this work, the dataset developed from the ab initio calculations is used to develop ML models. Unlike the ML models built from the literature datasets, this database contains all the thermodynamic data required for kinetic models, viz. standard enthalpy, standard entropy, and heat capacity at 300 K and higher temperatures.
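The abstract does not name the regression algorithm used on the literature data; the following minimal sketch assumes precomputed molecular descriptors and a random-forest regressor as one plausible choice, with synthetic placeholder data standing in for the collected database.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# X: rows of molecular descriptors (e.g., functional-group counts,
# topological indices); y: standard enthalpies of formation, kcal/mol.
# Both are synthetic placeholders for a literature-derived database.
rng = np.random.default_rng(0)
X = rng.random((200, 30))                    # 200 species, 30 descriptors
y = X @ rng.random(30) + rng.normal(0, 0.1, 200)

model = RandomForestRegressor(n_estimators=500, random_state=0)
scores = cross_val_score(model, X, y, cv=5,
                         scoring="neg_mean_absolute_error")
print(f"CV MAE: {-scores.mean():.3f} kcal/mol")  # compare against group additivity
```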
To numerically mimic real gasoline fuel reactivity, surrogates are proposed to facilitate advanced engine design and emissions prediction by chemical kinetic modelling. However, chemical kinetic models cannot always accurately predict non-regular emissions, e.g. aldehydes, ketones, and unsaturated hydrocarbons, which are important air pollutants. Therefore, we propose using machine-learning algorithms directly to achieve better predictions, circumventing the kinetic models. The combustion chemistry of 10 neat fuels, 6 primary reference fuels (PRF), and 6 FGX surrogates was tested in a jet-stirred reactor. Experimental data were collected in the same setup to maintain data uniformity and consistency. Measured species profiles of methane, ethylene, propylene, hydrogen, carbon monoxide, and carbon dioxide are used for machine-learning model development. The model considers both chemical effects and physical conditions: chemical effects are described via functional groups, viz. primary, secondary, tertiary, and quaternary carbons in the molecular structures, and physical conditions via temperature. Both machine-learning models used in this study showed good prediction accuracy. By expanding the experimental database, machine-learning models can be applied to many other hydrocarbons in future work for direct predictions.
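A minimal sketch of such a direct prediction is shown below, assuming the inputs are functional-group counts plus temperature and the targets are the six measured species; the model family and all data are illustrative placeholders, not the models or measurements from the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Each row: [n_primary_C, n_secondary_C, n_tertiary_C, n_quaternary_C, T (K)]
# Each target row: mole fractions of [CH4, C2H4, C3H6, H2, CO, CO2].
# Synthetic placeholders standing in for the jet-stirred-reactor database.
rng = np.random.default_rng(1)
X = np.hstack([rng.integers(0, 8, (300, 4)), rng.uniform(500, 1100, (300, 1))])
Y = rng.random((300, 6))

model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(X, Y)                            # multi-output regression, 6 species

fuel = np.array([[2, 5, 1, 0, 850.0]])     # hypothetical fuel at 850 K
print(model.predict(fuel))                 # predicted species profile
```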
|
32 |
Noninvasive assessment and classification of human skin burns using images of Caucasian and African patients / Abubakar, Aliyu, Ugail, Hassan, Bukar, Ali M. 20 March 2022 (has links)
Yes / Burns are among the most devastating injuries, subjecting thousands to loss of life and physical disfigurement each year. Both high-income and developing countries face major evaluation challenges, including but not limited to an inadequate workforce, poor diagnostic facilities, inefficient diagnosis, and high operational cost. As such, there is a need to develop an automatic machine-learning algorithm to noninvasively identify skin burns. Such an algorithm would operate with little or no human intervention, thereby acting as an affordable substitute for human expertise. We leverage the weights of pretrained deep neural networks for image description and subsequently feed the extracted image features into a support vector machine for classification. To the best of our knowledge, this is the first study that investigates black African skin. Interestingly, the proposed algorithm achieves state-of-the-art classification accuracy on both Caucasian and African datasets.
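A minimal sketch of this pipeline follows. The choice of ResNet-50 as the backbone, the preprocessing steps, and the file names are assumptions for illustration, since the paper only specifies pretrained deep networks feeding an SVM.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.svm import SVC
from PIL import Image

# Assumed backbone: any ImageNet-pretrained CNN works as a fixed extractor.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()        # drop the classifier -> 2048-d features
backbone.eval()

preprocess = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor(),
                        T.Normalize([0.485, 0.456, 0.406],
                                    [0.229, 0.224, 0.225])])

def features(path):
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return backbone(img).squeeze(0).numpy()

# Paths and labels are placeholders for the Caucasian and African datasets.
X = [features(p) for p in ["burn1.jpg", "healthy1.jpg"]]
y = [1, 0]                               # 1 = burn, 0 = healthy skin
clf = SVC(kernel="linear").fit(X, y)
print(clf.predict([features("query.jpg")]))
```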
|
33 |
Impact of Yeast Nutrient Supplementation Strategies on Hydrogen Sulfide Production during Cider Fermentation / Moore, Amy Nicole 18 May 2020 (has links)
Hydrogen sulfide (H2S) is a negative off-aroma produced during yeast fermentation; it is common in cider and leads to consumer rejection. H2S has a very low odor detection threshold (ODT) and is often described as "rotten egg". H2S is produced when juice is deficient in yeast nutrients, such as amino acids and yeast assimilable nitrogen (YAN), a common problem in apples since they are naturally low in nutrients. The purpose of this research was to investigate the effects of yeast nutrient addition on cider fermentation by adding four different nitrogen-rich supplements and evaluating the effects on H2S production, fermentation kinetics, and aroma quality. Three yeast strains (M2, EC1118, and ICV OKAY), four yeast nutrients (Fermaid K, Fermaid O, Experimental Nutrient, and DAP), and single versus split addition of nutrient were tested. For single additions, all nutrient was added pre-fermentation; for split additions, the first addition was pre-fermentation and the second at one-third total soluble solids (TSS) depletion, as measured by °Brix. Sensory evaluation was conducted on selected treatments. The greatest H2S was produced by the M2 yeast strain (525.63 ± 53.31 µg mL-1), the least on average by EC1118 (118.26 ± 26.33 µg mL-1), with ICV OKAY producing an intermediate amount (209.26 ± 31.63 µg mL-1). Significant differences were observed between treatments and total H2S production within yeast strains. Yeast strain had the largest effect on H2S production; the second largest effect was yeast nutrient type. Classical text analysis of descriptions of cider aroma was conducted, and 25 attributes were chosen to describe the ciders. Check-all-that-apply (CATA), a rapid sensory technique in which panelists select all attributes that apply, revealed no clear pattern between the variables tested. This work demonstrates that yeast nutrient type and yeast strain affect H2S production during cider fermentation. These findings provide a basis for improving the effectiveness of strategies used to prevent H2S production in cider fermentation. / Master of Science in Life Sciences / Cider, an alcoholic beverage made from fermenting apple juice, has grown in popularity and production in the United States in recent years. With increased production and sales comes increased demand for high-quality cider, but cider is prone to sensory faults. A common class of faults in cider aroma is the negative off-aromas known as volatile sulfur compounds (VSCs). These aromas are often described as "rotten eggs" or "cabbage" and lead to consumer rejection of the product. One of the most recognized VSCs is hydrogen sulfide (H2S), with its characteristic smell of "rotten eggs". These off-aromas are thought to be produced during yeast fermentation under nutrient-lacking conditions. Apples, depending on cultivar, ripeness, and other factors, naturally lack the yeast assimilable nitrogen, vitamins, amino acids, and other nutrients needed for a successful yeast fermentation, leading to off-aromas. Yeast nutrients can be added to apple juice to increase nutrient availability, but little research has focused on nutrient addition and timing of additions to prevent H2S production in cider. Most research on H2S production has been conducted in wine must or grape juice; this knowledge may be of limited use when applied to apple juice due to differences in juice chemistry.
Providing cider makers with specific, scientific strategies to prevent off-aromas, such as H2S, is important to the continued growth of the cider industry. This research explores aroma quality and H2S prevention strategies in cider by evaluating how yeast nutrient addition, via four exogenous nitrogen-rich yeast nutrients, and the timing of nutrient addition affect H2S production, fermentation kinetics, and consumer perception of aroma in cider fermentation.
|
34 |
Descriptors for Edaravone; Studies on its Structure, and Prediction of Properties / Liu, Xiangli, Aghamohammadi, Amin, Afarinkia, Kamyar, Abraham, R.J., Acree, W.E. Jr, Abraham, M.H. 15 March 2021 (has links)
Yes / Literature solubilities and NMR and IR studies have been used to obtain properties, or descriptors, of edaravone. These show that edaravone has significant hydrogen bond acidity, so it must exist in solution partly as the OH and NH forms, as found by Freyer et al. Descriptors have been assigned to the keto form, which has low hydrogen bond acidity and is the dominant form in nonpolar solvents. Physicochemical properties of the keto form can then be calculated, such as solubilities in nonpolar solvents, partition coefficients from water to nonpolar solvents, and partition coefficients from air to biological phases.
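The descriptors referred to here are the Abraham solute descriptors, which enter linear free-energy relationships of the form log SP = c + eE + sS + aA + bB + vV. The sketch below simply evaluates such a relationship; all numerical values are illustrative placeholders, not the fitted edaravone descriptors or any published solvent coefficients.

```python
def abraham_log_sp(E, S, A, B, V, c, e, s, a, b, v):
    """Abraham linear free-energy relationship:
    log SP = c + e*E + s*S + a*A + b*B + v*V,
    where E, S, A, B, V are the solute descriptors and the lower-case
    terms are system (solvent) coefficients fitted elsewhere."""
    return c + e * E + s * S + a * A + b * B + v * V

# All numbers below are illustrative placeholders.
log_p = abraham_log_sp(E=1.2, S=1.5, A=0.1, B=0.9, V=1.3,
                       c=0.09, e=0.19, s=-0.36, a=0.13, b=-2.8, v=2.6)
print(f"predicted log P (illustrative): {log_p:.2f}")
```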
|
35 |
Detecção de cenas em segmentos semanticamente complexos / Detection of scenes in semantically complex segments / Lopes, Bruno Lorenço 28 April 2014 (has links)
Many areas of computing (content personalization and adaptation, information retrieval, among others) benefit from segmenting video into smaller units of information. The literature reports many techniques and methods whose goal is to identify these units. One limitation of these techniques is that they do not handle scene detection in semantically complex segments, defined as video snippets that present more than one subject or theme and whose latent semantics can hardly be determined using a single medium. Such segments are very relevant, since they are present in multiple video domains, such as movies, news, and even television commercials. This Master's dissertation proposes a video scene segmentation technique able to detect scenes in semantically complex segments. To achieve this goal, it uses latent semantics extracted with Bag of Visual Words to group a video's segments. The grouping is multimodal, analyzing the visual and aural features of each video and combining the results using a late-fusion strategy. This work demonstrates the technical feasibility of recognizing scenes in semantically complex segments.
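A minimal sketch of the Bag-of-Visual-Words representation and the late-fusion step is given below; the vocabulary size, the equal fusion weight, and the synthetic data are assumptions, not parameters from the dissertation.

```python
import numpy as np
from sklearn.cluster import KMeans

def bovw_histograms(local_descs_per_shot, k=100):
    """Build a visual vocabulary by k-means over all local descriptors,
    then represent each shot as a normalized word histogram."""
    stacked = np.vstack(local_descs_per_shot)
    vocab = KMeans(n_clusters=k, n_init=4, random_state=0).fit(stacked)
    hists = []
    for descs in local_descs_per_shot:
        words = vocab.predict(descs)
        h, _ = np.histogram(words, bins=np.arange(k + 1))
        hists.append(h / max(h.sum(), 1))
    return np.array(hists)

def late_fusion(sim_visual, sim_audio, w=0.5):
    """Combine per-modality similarity matrices; the weight is assumed."""
    return w * sim_visual + (1 - w) * sim_audio

shots = [np.random.rand(50, 64) for _ in range(8)]  # synthetic local descriptors
H = bovw_histograms(shots, k=20)                    # one histogram per shot
print(H.shape)                                      # (8, 20)
```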
|
37 |
Trajectory-based Descriptors for Action Recognition in Real-world Videos / Narayan, Sanath January 2015 (has links) (PDF)
This thesis explores motion trajectory-based approaches to recognising human actions in real-world, unconstrained videos. Recognising actions is an important task in applications such as video retrieval, surveillance, human-robot interaction, analysis of sports videos, video summarization, behaviour monitoring, etc. There has been a considerable amount of research in this regard. Earlier work focused on videos captured by static cameras, where it was relatively easy to recognise the actions. With more videos being captured by moving cameras, recognising actions in videos with irregular camera motion remains a challenge in unconstrained settings, with variations in scale, view, illumination, occlusion, and unrelated background motion. With the increase in videos captured from wearable or head-mounted cameras, recognising actions in egocentric videos is also explored in this thesis.
First, an effective motion segmentation method to identify the camera motion in videos captured by moving cameras is explored. Next, action recognition in videos captured from the normal third-person (perspective) view is discussed. Further, action recognition approaches for first-person (egocentric) views are investigated; first-person videos are often associated with frequent unintended camera motion, due to the motion of the head and hence of the head-mounted (wearable) camera. This is followed by recognition of actions in egocentric videos in a multi-camera setting. Lastly, novel feature encoding and subvolume sampling (for "deep" approaches) techniques are explored in the context of action recognition in videos.
The first part of the thesis explores two effective segmentation approaches to identify the motion due to the camera. The first approach is based on curve fitting of the motion trajectories and finding the model which best fits the camera motion. The curve-fitting approach works when the generated trajectories are smooth enough. To overcome this drawback and segment trajectories under non-smooth conditions, a second approach based on trajectory scoring and grouping is proposed. By identifying the instantaneous dominant background motion and accordingly aggregating the scores (denoting the "foregroundness") along each trajectory, the motion associated with the camera can be separated from the motion due to foreground objects. Additionally, the segmentation result has been used to align videos from moving cameras, resulting in videos that seem to be captured by nearly static cameras.
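A toy realization of the scoring-and-grouping idea might look as follows; taking the per-frame median as the instantaneous dominant background motion and averaging deviations along each trajectory are assumptions, since the abstract does not specify the scoring function.

```python
import numpy as np

def foregroundness(trajectories):
    """trajectories: array (N, T, 2) of point displacements per frame.
    Scores each trajectory by accumulating its deviation from the
    per-frame dominant (median) motion; high scores suggest foreground."""
    dominant = np.median(trajectories, axis=0)                   # (T, 2) background
    deviation = np.linalg.norm(trajectories - dominant, axis=2)  # (N, T)
    return deviation.mean(axis=1)                                # aggregate per trajectory

rng = np.random.default_rng(0)
camera = rng.normal(0, 0.1, (90, 40, 2)) + np.array([1.0, 0.0])  # background motion
obj = rng.normal(0, 0.1, (10, 40, 2)) + np.array([-2.0, 1.0])    # moving object
scores = foregroundness(np.vstack([camera, obj]))
print(scores[:90].mean(), scores[90:].mean())  # object trajectories score higher
```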
In the second part of the thesis, recognising actions in normal videos captured from third-person cameras is investigated. To this end, two kinds of descriptors are explored. The first is the covariance descriptor adapted for motion trajectories: the covariance descriptor for a trajectory encodes the co-variations of different features along the trajectory's length. Covariance, being a second-order encoding, captures information about the trajectory different from that of first-order encodings. The second descriptor is based on Granger causality. The novel causality descriptor encodes the "cause and effect" relationships between the motion trajectories of the actions. This type of interaction descriptor captures the causal inter-dependencies among the motion trajectories and encodes complementary information, different from descriptors based on the occurrence of features. Causal dependencies are traditionally computed on time-varying signals; we extend this to capture dependencies between spatio-temporal signals and compute generalised causality descriptors, which perform better than their traditional counterparts.
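A minimal sketch of a trajectory covariance descriptor is shown below; the feature set and dimensionality are placeholders, and a log-Euclidean mapping of the covariance matrix is a common refinement not shown here.

```python
import numpy as np

def covariance_descriptor(feats):
    """feats: (T, d) per-frame features along one trajectory, e.g.
    position, velocity, appearance entries. Returns the upper triangle
    of the d x d covariance matrix as a fixed-length descriptor."""
    C = np.cov(feats, rowvar=False)          # second-order co-variations
    iu = np.triu_indices(C.shape[0])
    return C[iu]

traj_feats = np.random.rand(15, 6)           # synthetic 6-d features, 15 frames
desc = covariance_descriptor(traj_feats)     # length d(d+1)/2 = 21
print(desc.shape)
```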
An egocentric or first-person video is captured from the perspective of the person of interest (POI). The POI wears a camera and moves around doing his/her activities; the camera records the events and activities as seen by him/her, while the POI performing the actions or activities is not seen by the camera he/she wears. Activities performed by the POI are called first-person actions, and third-person actions are those done by others and observed by the POI. The third part of the thesis explores action recognition in egocentric videos. Differentiating first-person and third-person actions is important when summarising or analysing the behaviour of the POI; thus, the goal is to recognise both the action and the perspective from which it is observed. Trajectory descriptors are adapted to recognise actions, with the motion-trajectory ranking method of segmentation as a pre-processing step to identify the camera motion. The motion segmentation step is necessary to remove unintended head motion (camera motion) during video capture. To recognise actions and corresponding perspectives in a multi-camera setup, a novel inter-view causality descriptor based on the causal dependencies between trajectories in different views is explored. Since this is a new problem, two first-person datasets are created with eight actions in third-person and first-person perspectives. The first dataset is a single-camera dataset with action instances from first-person and third-person views; the second is a multi-camera dataset in which each action instance has multiple first-person and third-person views.
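The classical pairwise Granger test that such causality descriptors build on can be computed as below; the thesis's spatio-temporal and inter-view generalisations are not reproduced here, and the synthetic signals are illustrative.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

# Two synthetic 1-D trajectory signals in which x drives y at a lag of 2.
rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = np.roll(x, 2) + 0.3 * rng.normal(size=200)

# Test whether x Granger-causes y; column order is [effect, cause].
res = grangercausalitytests(np.column_stack([y, x]), maxlag=4)
f_stat, p_value = res[2][0]["ssr_ftest"][:2]
print(f"lag-2 F-test: F={f_stat:.1f}, p={p_value:.4f}")
```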
In the final part of the thesis, a feature encoding scheme and a subvolume sampling scheme for recognising actions in videos are proposed. The proposed Hyper-Fisher Vector feature encoding is based on embedding the Bag-of-Words encoding into the Fisher Vector encoding. The resulting encoding is simple, effective, and improves classification performance over state-of-the-art techniques; it can be used in place of the traditional Fisher Vector encoding in other recognition approaches. The proposed subvolume sampling scheme, used to generate second-layer features in "deep" approaches for action recognition in videos, is based on iteratively increasing the size of the valid subvolumes in the temporal direction to generate newer subvolumes. The proposed sampling requires fewer subvolumes to be generated to better represent the actions and is thus less computationally intensive than the original sampling scheme. The techniques are evaluated on large-scale, challenging, publicly available datasets. The Hyper-Fisher Vector combined with the proposed sampling scheme performs better than state-of-the-art techniques for action classification in videos.
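One plausible reading of the Hyper-Fisher Vector, with Bag-of-Words units fed into a Fisher Vector, is sketched below using first-order (mean-gradient) terms only; the unit definition and all sizes are assumptions, not the thesis's exact construction.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fisher_vector(X, gmm):
    """First-order Fisher Vector (mean gradients only) of descriptors X
    under a diagonal-covariance GMM."""
    N = X.shape[0]
    gamma = gmm.predict_proba(X)                       # (N, K) soft assignments
    mu, var, w = gmm.means_, gmm.covariances_, gmm.weights_
    parts = []
    for k in range(gmm.n_components):
        diff = (X - mu[k]) / np.sqrt(var[k])           # whitened residuals
        parts.append((gamma[:, k:k + 1] * diff).sum(0) / (N * np.sqrt(w[k])))
    return np.concatenate(parts)

# "Hyper" step (assumed reading): the descriptors fed to the FV are
# themselves BoW histograms computed over units/subvolumes of the video.
bow_histograms = np.random.rand(500, 32)               # synthetic BoW units
gmm = GaussianMixture(n_components=8, covariance_type="diag",
                      random_state=0).fit(bow_histograms)
video_units = bow_histograms[:40]                      # units of one video
fv = fisher_vector(video_units, gmm)                   # 8 * 32 = 256 dims
print(fv.shape)
```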
|
38 |
Ranking And Classification of Chemical Structures for Drug Discovery : Development of Fragment Descriptors And Interpolation Scheme / Kandel, Durga Datta January 2013 (has links) (PDF)
Deciphering the activity of chemical molecules against a pathogenic organism is an essential task in the drug discovery process. Virtual screening, in which a few plausible molecules are selected from a large set for further processing using computational methods, has become an integral part of this process and complements expensive and time-consuming in vivo and in vitro experiments. To this end, it is essential to extract certain features from molecules which, on the one hand, are relevant to the biological activity under consideration and, on the other, are suitable for designing fast and robust algorithms. These features/representations are derived, in numerical form, either from physicochemical properties or from molecular structures, and are known as descriptors.
In this work we develop two new molecular-fragment descriptors based on a critical analysis of existing descriptors. This development is guided primarily by the notion of coding degeneracy and by the ordering induced by the descriptor on the fragments. The first descriptor is derived from the simple-graph representation of the molecule and attempts to encode topological features, i.e. the connectivity pattern, in a hierarchical way without discriminating atom or bond types. The second descriptor extends the first by weighting the atoms (vertices) according to the bonding pattern, valence state, and type of each atom.
Further, the usefulness of these indices is tested by ranking and classifying molecules in two previously studied large heterogeneous data sets with regard to their anti-tubercular and other antibacterial activity. This is achieved by developing a scoring function based on clustering with the new descriptors. Clusters are obtained by ordering the descriptors of the training-set molecules and identifying regions which come (almost) exclusively from active or inactive molecules. To test the activity of a new molecule, the overlap of its descriptors with those clusters (interpolation) is weighted. Our results are superior to those of previous studies: we obtained better classification performance using only structural information, while previous studies used both structural features and some physicochemical parameters. This makes our model simpler, more interpretable, and less vulnerable to statistical problems such as chance correlation and overfitting. With a focus on predictive modelling, we have carried out rigorous statistical validation.
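A toy realization of this interval-based scoring on a single 1-D fragment descriptor is sketched below; the bin count, the 0.9 purity threshold, and the uniform weighting are assumptions, not the thesis's fitted scoring function.

```python
import numpy as np

def active_intervals(desc_active, desc_inactive, n_bins=50):
    """Bin the 1-D descriptor axis and keep bins populated (almost)
    exclusively by actives; these act as the 'active clusters'."""
    lo = min(desc_active.min(), desc_inactive.min())
    hi = max(desc_active.max(), desc_inactive.max())
    edges = np.linspace(lo, hi, n_bins + 1)
    h_act, _ = np.histogram(desc_active, edges)
    h_ina, _ = np.histogram(desc_inactive, edges)
    purity = h_act / np.maximum(h_act + h_ina, 1)
    return edges, purity > 0.9                 # purity threshold: assumed

def score(molecule_descs, edges, mask):
    """Fraction of a molecule's fragment descriptors that fall into
    active-exclusive bins (interpolation weighting, simplified)."""
    bins = np.clip(np.digitize(molecule_descs, edges) - 1, 0, len(mask) - 1)
    return mask[bins].mean()

rng = np.random.default_rng(0)
edges, mask = active_intervals(rng.normal(2, 0.5, 300), rng.normal(0, 0.5, 300))
print(score(rng.normal(2, 0.5, 12), edges, mask))   # high -> predicted active
```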
The new descriptors primarily utilize topological information in a hierarchical way. This can have significant implications for the design of new bioactive molecules (inverse QSAR, combinatorial library design), which is plagued by combinatorial explosion due to the use of large numbers of descriptors. While the combinatorial generation of molecules with desirable properties is still a problem to be satisfactorily solved, our model has the potential to reduce the number of degrees of freedom, thereby reducing the complexity.
|
39 |
Adaptive Losses for Camera Pose Supervision / Dahlqvist, Marcus January 2021 (has links)
This master's thesis studies the learning of dense feature descriptors where camera poses are the only supervisory signal. The use of camera poses as a supervisory signal has only been published once before, and this thesis expands on that work by utilizing a couple of techniques meant to increase the robustness of the method, which is particularly important when ground-truth correspondences are unavailable. Firstly, an adaptive robust loss is utilized to better differentiate inliers and outliers. Secondly, statistical properties during training are both enforced and adapted to, in an attempt to alleviate problems with the uncertainty introduced by not having true correspondences available. These additions are shown to slightly increase performance and also highlight some key ideas related to prediction certainty and robustness when working with camera poses as a supervisory signal. Finally, possible directions for future work are discussed.
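The adaptive robust loss here is plausibly the general robust loss of Barron (CVPR 2019), in which a shape parameter alpha (and optionally the scale c) is learned during training; a NumPy sketch for alpha outside the special cases {0, 2} follows.

```python
import numpy as np

def general_robust_loss(x, alpha, c):
    """Barron's general robust loss for alpha not in {0, 2}:
    rho(x) = (|alpha-2|/alpha) * (((x/c)^2/|alpha-2| + 1)^(alpha/2) - 1).
    alpha = 2 recovers L2 and alpha -> 0 the Cauchy loss; in the adaptive
    variant, alpha is learned jointly with the model so the loss itself
    decides how aggressively to downweight outliers."""
    z = (x / c) ** 2
    b = abs(alpha - 2.0)
    return (b / alpha) * ((z / b + 1.0) ** (alpha / 2.0) - 1.0)

residuals = np.linspace(-5, 5, 11)
for alpha in (1.0, -2.0):    # smoothed-L1-like and Geman-McClure-like shapes
    print(alpha, np.round(general_robust_loss(residuals, alpha, c=1.0), 2))
```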
|
40 |
Graphicacy within the secondary school curriculum : an exploration of continuity and progression of graphicacy in children aged 11 to 15 / Danos, Xenia January 2012 (has links)
Graphicacy is the fundamental human capability of communicating through still images. It has been described as the fourth ace within education, alongside literacy, numeracy, and articulacy; however, it has been neglected both within education and within the research field. This thesis investigates graphicacy and students' learning, structured around three objectives: establishing what graphicacy is and how it is used in the school curriculum; demonstrating the wider significance of design and technology teaching and learning by collecting evidence of the importance of graphicacy across the curriculum; and establishing how the abilities to understand and create images affect students' learning. A literature review was conducted focused on three areas. Firstly, identifying the meaning of graphicacy, the elements contained within it, and relevant prior studies, including its use in different subject areas and image use within teaching; this formed the foundation for a new taxonomy of graphicacy. Secondly, the levels of drawing and the developmental stages children go through were investigated, the need for further research on the abilities of children aged 11 to 14 was identified, and the well-balanced arguments in the nature-versus-nurture debate are described. Thirdly, methodologies used to measure graphicacy and to map the results onto levels of different competencies were reviewed. A naturalistic and often opportunistic approach was followed in this research. The research methodology was based on the analysis of textbooks and, later, on research within practice. The research included the development, validation, and use of the taxonomy of graphicacy; case studies in Cyprus, the USA, and England identifying graphicacy use across the curriculum; and the creation of continuity and progression descriptors through the analysis of students' work. This work covered rendering, perspective drawing, logo design, portrait drawing, and star profile charts. The research methodologies developed and implemented for conducting co-research and the Delphi studies are also described. Through interviews with experts, the taxonomy was validated as an appropriate research tool for identifying graphicacy use across the curriculum. These studies identified links between design and technology and all other subject areas studied. Similar patterns of graphicacy use were identified across the three schools, one each in Cyprus, the USA, and the UK. Photographs were the most commonly used graphicacy element across all subject areas studied. Design and technology within England was found to use the widest variety of graphicacy elements, providing evidence towards research objective 3: establishing how the ability to understand and create images affects students' learning. Continuity and progression (CaP) descriptors were created for each area covered by this research. The success of the CaP descriptors relied on the technical complexity involved in the creation of each image. Some evidence was found concerning the limits of natural development and how nurture can further develop graphicacy skills. In addition, co-research as a methodology, together with its limitations and potential, is described.
|