  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

An Algorithm for the Detection of Handguns in Terahertz Images

Lingg, Andrew J. January 2008 (has links)
No description available.
32

TESTING THE USEFULNESS OF GEOMORPHIC VARIABLES AS PREDICTORS OF STREAM HEALTH: WESTERN ALLEGHENY PLATEAU

Meyer, Christine J. 12 October 2006 (has links)
No description available.
33

Automated Tools for Accelerating Development of Combustion Modelling

Yalamanchi, Kiran K. 09 1900 (has links)
The ever-increasing focus of policy-makers on environmental issues is pushing the combustion community to make combustion cleaner by optimizing combustion equipment to reduce emissions, improve efficiency and satisfy growing energy demand. A major part of this effort involves advancing the modelling capabilities for these complex combustion systems, which combine computational fluid dynamics with detailed chemical kinetic models. A chemical kinetic model comprises a series of elementary reactions with corresponding kinetic rate parameters and species thermodynamic and transport data, and the predictive capability of these models depends on the accuracy with which the individual reaction rates, thermodynamic properties and transport parameters are known. Only a minor fraction of the rate constants and thermodynamic properties in widely used kinetic mechanisms are experimentally derived or theoretically calculated; the remainder are approximated, using rate rules for rate constants and group additivity methods for thermodynamic properties. Recent work has highlighted the need for error checking when preparing models from such approximations, but a useful community tool to perform this analysis has been missing. In the initial part of this work, we developed a simple online tool to screen chemical kinetic mechanisms for bimolecular reactions whose rate constants exceed the collision limit. Furthermore, unphysically fast time scales can remain an issue even when all bimolecular reactions are within collision limits, so we also present a procedure to screen for ultra-fast reaction time scales using computational singular perturbation (CSP). Screening of kinetic models is a necessary condition but not a sufficient one; new approaches for simulating complex chemically reacting systems are therefore needed. This work focuses on developing new methods for estimating thermodynamic data efficiently and accurately, thereby improving compliance with the aforementioned screening. Machine learning (ML) has increasingly become a tool of choice for regression, replacing traditional function fitting. Group additivity fits simple functions to existing data and uses them to estimate unknown values; ML algorithms do the same without fixing a specific functional form, letting the algorithm learn the non-linearity from the training data itself. As new data become available, ML models can continue to improve, which is not necessarily the case for traditional methods. In the first part of the study, standard-enthalpy data are collected from literature sources and ML models are built on these databases. Two models were built and studied, one for a straight-chain species dataset and one for a cyclic species dataset. Molecular descriptors are used as inputs because the datasets collected from the literature are too small for sparse representations. As expected, the ML models show a clear improvement over the group additivity method, and the improvement is more significant for cyclic species. Motivated by this benefit, a further step was taken: because a homogeneous and accurate dataset is necessary for building an ML model that can generate thermodynamic data for kinetic models, an accurate thermodynamic database was built from ab initio calculations.
The species in the dataset are taken from a detailed and well-established mechanism so as to cover all the species in a typical kinetic mechanism, and the calculations are performed at a high level of accuracy in comparison with similar datasets in the literature. In the later part of this work, the dataset developed from the ab initio calculations is used to build ML models. Unlike the ML models built from literature datasets, this database contains all the thermodynamic data required for kinetic models, namely standard enthalpy, standard entropy, and heat capacity at 300 K and higher temperatures. To numerically mimic the reactivity of real gasoline fuels, surrogates are proposed to facilitate advanced engine design and to predict emissions through chemical kinetic modelling. However, chemical kinetic models cannot always accurately predict non-regular emissions, e.g. aldehydes, ketones and unsaturated hydrocarbons, which are important air pollutants. We therefore propose using machine-learning algorithms directly, circumventing the kinetic models, to achieve better predictions. The combustion chemistry of 10 neat fuels, 6 primary reference fuels (PRF) and 6 FGX surrogates was tested in a jet-stirred reactor, with all experimental data collected in the same setup to maintain uniformity and consistency. Measured species profiles of methane, ethylene, propylene, hydrogen, carbon monoxide and carbon dioxide are used for machine-learning model development. The model considers both chemical effects and physical conditions: chemical effects are described by functional groups, viz. primary, secondary, tertiary and quaternary carbons in the molecular structures, and physical conditions by temperature. Both machine-learning models used in this study show good prediction accuracy. By expanding the experimental database, the machine-learning models can be applied to many other hydrocarbons in future work for direct predictions.
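The collision-limit screening mentioned in this abstract can be illustrated with a short sketch. The Python snippet below is a hedged example, not the actual online tool: the function names, the hard-sphere parameters and the example reaction numbers are assumptions chosen only to show how a modified Arrhenius rate constant can be compared against a hard-sphere collision-limit estimate at a given temperature.

```python
import math

R = 1.98720425864083e-3   # kcal/(mol*K), for Ea given in kcal/mol
KB = 1.380649e-23         # J/K
NA = 6.02214076e23        # 1/mol

def modified_arrhenius(A, n, Ea, T):
    """k(T) = A * T^n * exp(-Ea / (R*T)); A in cm^3/(mol*s), Ea in kcal/mol."""
    return A * T**n * math.exp(-Ea / (R * T))

def collision_limit(d_ang, mu_amu, T):
    """Hard-sphere collision rate constant in cm^3/(mol*s).

    d_ang  : effective collision diameter in Angstrom (assumed value)
    mu_amu : reduced mass of the colliding pair in amu
    """
    d = d_ang * 1e-10                        # m
    mu = mu_amu * 1.66053906660e-27          # kg
    k_si = NA * math.pi * d**2 * math.sqrt(8.0 * KB * T / (math.pi * mu))  # m^3/(mol*s)
    return k_si * 1e6                        # m^3 -> cm^3

def exceeds_collision_limit(A, n, Ea, d_ang, mu_amu, T):
    return modified_arrhenius(A, n, Ea, T) > collision_limit(d_ang, mu_amu, T)

# Hypothetical bimolecular reaction screened at 1000 K.
print(exceeds_collision_limit(A=1e14, n=0.0, Ea=5.0, d_ang=3.5, mu_amu=1.9, T=1000.0))
```

A full screening tool would loop such a check over every bimolecular reaction in a mechanism and over a range of temperatures, flagging any reaction whose fitted rate rises above the collision-limit curve.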
34

Noninvasive assessment and classification of human skin burns using images of Caucasian and African patients

Abubakar, Aliyu, Ugail, Hassan, Bukar, Ali M. 20 March 2022 (has links)
Burns are serious injuries that subject thousands of people to loss of life and permanent disfigurement each year. Both high-income and developing countries face major evaluation challenges, including but not limited to an inadequate workforce, poor diagnostic facilities, inefficient diagnosis and high operational cost. As such, there is a need for an automatic machine learning algorithm to noninvasively identify skin burns. Such a system would operate with little or no human intervention, thereby acting as an affordable substitute for human expertise. We leverage the weights of pretrained deep neural networks to describe the images, and the extracted image features are subsequently fed into a support vector machine for classification. To the best of our knowledge, this is the first study to investigate black African skin. The proposed algorithm achieves state-of-the-art classification accuracy on both the Caucasian and the African datasets.
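The pipeline summarized above, pretrained CNN features followed by an SVM classifier, can be sketched in a few lines. The snippet below is an illustrative sketch rather than the authors' code: it assumes a torchvision ResNet-50 backbone and a scikit-learn SVM, and the image paths and labels are hypothetical placeholders.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.svm import SVC
from PIL import Image

# Pretrained backbone used as a fixed feature extractor (assumed choice: ResNet-50).
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
backbone.fc = torch.nn.Identity()          # drop the ImageNet classification head
backbone.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def extract_features(image_paths):
    feats = []
    for p in image_paths:
        x = preprocess(Image.open(p).convert("RGB")).unsqueeze(0)
        feats.append(backbone(x).squeeze(0).numpy())   # 2048-d descriptor per image
    return feats

# Hypothetical training data: image paths with burn (1) / healthy (0) labels.
train_paths, train_labels = ["burn_01.jpg", "healthy_01.jpg"], [1, 0]
clf = SVC(kernel="linear").fit(extract_features(train_paths), train_labels)
print(clf.predict(extract_features(["unknown_01.jpg"])))
```

Keeping the backbone frozen and training only the SVM is what makes this kind of approach practical on the small, imbalanced datasets typical of medical imaging.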
35

Detecção de cenas em segmentos semanticamente complexos / Detection of scenes in semantically complex segments

Lopes, Bruno Lorenço 28 April 2014 (has links)
Many areas of computing (content personalization and adaptation, information retrieval, among others) benefit from segmenting video into smaller units of information, and the literature reports many methods and techniques whose goal is to identify these units. One limitation is that such techniques do not address scene detection in semantically complex segments, defined as video excerpts that present more than one subject or theme and whose latent semantics can hardly be determined from a single medium alone. Such segments are highly relevant, since they appear in several video domains such as films, newscasts and even commercials. This Master's dissertation proposes a video segmentation technique capable of identifying scenes in semantically complex segments. To do so, it uses the latent semantics obtained with Bag of Visual Words to group the segments of a video. The grouping is based on multimodality, analysing the visual and audio features of each video and combining the results through a late-fusion strategy. This work demonstrates the technical feasibility of recognizing scenes in semantically complex segments.
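As a rough illustration of the grouping step described above, the hedged sketch below builds Bag-of-Visual-Words histograms for video segments, computes per-modality affinities, and combines them with a simple late-fusion weighting before clustering. The feature shapes, the codebook size and the equal fusion weights are assumptions for illustration, not the dissertation's actual parameters.

```python
import numpy as np
from sklearn.cluster import KMeans

def bovw_histogram(local_descriptors, codebook):
    """Quantize one segment's local descriptors against a visual codebook."""
    words = codebook.predict(local_descriptors)
    hist, _ = np.histogram(words, bins=np.arange(codebook.n_clusters + 1))
    return hist / max(hist.sum(), 1)

# Hypothetical inputs: per-segment local visual descriptors and audio features.
rng = np.random.default_rng(0)
segments_visual = [rng.normal(size=(200, 64)) for _ in range(8)]   # e.g. SIFT-like patches
segments_audio = rng.normal(size=(8, 13))                          # e.g. mean MFCCs

# Build a visual codebook (assumed size: 50 words) from all local descriptors.
codebook = KMeans(n_clusters=50, n_init=10, random_state=0).fit(np.vstack(segments_visual))
visual_hists = np.array([bovw_histogram(d, codebook) for d in segments_visual])

def affinity(X):
    """Simple pairwise similarity between segments for one modality."""
    d = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    return np.exp(-d / (d.mean() + 1e-9))

# Late fusion: combine modality-level affinities, then group segments into scenes.
fused = 0.5 * affinity(visual_hists) + 0.5 * affinity(segments_audio)  # assumed equal weights
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(fused)
print(labels)   # scene group assigned to each segment
```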
37

Trajectory-based Descriptors for Action Recognition in Real-world Videos

Narayan, Sanath January 2015 (has links) (PDF)
This thesis explores motion trajectory-based approaches to recognise human actions in real-world, unconstrained videos. Recognising actions is an important task in applications such as video retrieval, surveillance, human-robot interaction, analysis of sports videos, video summarisation and behaviour monitoring, and a considerable amount of research has been done in this regard. Earlier work focused on videos captured by static cameras, where it was relatively easy to recognise the actions. With more videos now being captured by moving cameras, recognising actions under irregular camera motion remains a challenge in unconstrained settings, with variations in scale, view, illumination, occlusion and unrelated motion in the background. With the increase in videos captured from wearable or head-mounted cameras, recognising actions in egocentric videos is also explored in this thesis. At first, an effective motion segmentation method to identify the camera motion in videos captured by moving cameras is explored. Next, action recognition in videos captured in the normal third-person (perspective) view is discussed. Further, action recognition approaches for first-person (egocentric) views are investigated; first-person videos are often associated with frequent unintended camera motion, due to the motion of the head and hence of the head-mounted (wearable) camera. This is followed by recognition of actions in egocentric videos in a multi-camera setting. Lastly, novel feature encoding and subvolume sampling (for “deep” approaches) techniques are explored in the context of action recognition in videos. The first part of the thesis explores two effective segmentation approaches to identify the motion due to the camera. The first approach is based on curve fitting of the motion trajectories and finding the model which best fits the camera motion model; this curve-fitting approach works when the generated trajectories are smooth enough. To overcome this drawback and segment trajectories under non-smooth conditions, a second approach based on trajectory scoring and grouping is proposed. By identifying the instantaneous dominant background motion and accordingly aggregating the scores (denoting the “foregroundness”) along the trajectory, the motion associated with the camera can be separated from the motion due to foreground objects. Additionally, the segmentation result has been used to align videos from moving cameras, resulting in videos that appear to be captured by nearly static cameras. In the second part of the thesis, recognising actions in normal videos captured from third-person cameras is investigated. To this end, two kinds of descriptors are explored. The first is the covariance descriptor adapted for motion trajectories: the covariance descriptor for a trajectory encodes the co-variations of different features along the trajectory's length, and, being a second-order encoding, captures information about the trajectory that differs from that of a first-order encoding. The second descriptor is based on Granger causality. The novel causality descriptor encodes the “cause and effect” relationships between the motion trajectories of the actions. This type of interaction descriptor captures the causal inter-dependencies among the motion trajectories and encodes complementary information, different from descriptors based on the occurrence of features.
The causal dependencies are traditionally computed on time-varying signals. We extend this further to capture dependencies between spatio-temporal signals and compute generalised causality descriptors which perform better than their traditional counterparts. An egocentric or first-person video is captured from the perspective of the person-of-interest (POI). The POI wears a camera and moves around performing his/her activities; the camera records the events and activities as seen by the POI, while the POI performing the actions is not seen by the camera he/she wears. Activities performed by the POI are called first-person actions, and third-person actions are those performed by others and observed by the POI. The third part of the thesis explores action recognition in egocentric videos. Differentiating first-person and third-person actions is important when summarising or analysing the behaviour of the POI, so the goal is to recognise both the action and the perspective from which it is observed. Trajectory descriptors are adapted to recognise actions, with the motion trajectory ranking method of segmentation as a pre-processing step to identify the camera motion; this motion segmentation step is necessary to remove unintended head motion (camera motion) during video capture. To recognise actions and the corresponding perspectives in a multi-camera setup, a novel inter-view causality descriptor based on the causal dependencies between trajectories in different views is explored. Since this is a new problem, two first-person datasets are created with eight actions in third-person and first-person perspectives: the first is a single-camera dataset with action instances from first-person and third-person views, and the second is a multi-camera dataset in which each action instance has multiple first-person and third-person views. In the final part of the thesis, a feature encoding scheme and a subvolume sampling scheme for recognising actions in videos are proposed. The proposed Hyper-Fisher Vector feature encoding is based on embedding the Bag-of-Words encoding into the Fisher Vector encoding. The resulting encoding is simple, effective and improves classification performance over state-of-the-art techniques, and it can be used in place of the traditional Fisher Vector encoding in other recognition approaches. The proposed subvolume sampling scheme, used to generate second-layer features in “deep” approaches to action recognition in videos, is based on iteratively increasing the size of the valid subvolumes in the temporal direction to generate new subvolumes. The proposed sampling requires fewer subvolumes to be generated to better represent the actions and is thus less computationally intensive than the original sampling scheme. The techniques are evaluated on large-scale, challenging, publicly available datasets. The Hyper-Fisher Vector combined with the proposed sampling scheme performs better than state-of-the-art techniques for action classification in videos.
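To make the covariance-descriptor idea above concrete, here is a small hedged sketch. It builds a per-frame feature matrix for one trajectory and encodes the trajectory as the (vectorized) matrix logarithm of its feature covariance; the specific per-frame features (position, velocity, acceleration) and the log-Euclidean mapping are illustrative assumptions, not necessarily the thesis's exact design.

```python
import numpy as np
from scipy.linalg import logm

def trajectory_covariance_descriptor(points):
    """Covariance descriptor for one motion trajectory.

    points : (T, 2) array of tracked (x, y) positions over T frames.
    Returns the upper triangle of log(C), where C is the covariance of the
    per-frame features [x, y, vx, vy, ax, ay] (illustrative feature choice).
    """
    pts = np.asarray(points, dtype=float)
    vel = np.gradient(pts, axis=0)          # first-order motion
    acc = np.gradient(vel, axis=0)          # second-order motion
    feats = np.hstack([pts, vel, acc])      # (T, 6) feature matrix
    cov = np.cov(feats, rowvar=False) + 1e-6 * np.eye(feats.shape[1])  # regularized covariance
    log_cov = logm(cov).real                # log-Euclidean mapping to a vector space
    iu = np.triu_indices(log_cov.shape[0])
    return log_cov[iu]                      # vectorized symmetric matrix

# Usage with a toy trajectory of 15 frames.
traj = np.column_stack([np.linspace(0, 10, 15), np.sin(np.linspace(0, 3, 15))])
print(trajectory_covariance_descriptor(traj).shape)   # (21,)
```

Because the descriptor is second-order, two trajectories with similar mean motion but different co-variation of velocity and acceleration map to different points, which is what distinguishes it from first-order occurrence-based encodings.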
38

Ranking And Classification of Chemical Structures for Drug Discovery : Development of Fragment Descriptors And Interpolation Scheme

Kandel, Durga Datta January 2013 (has links) (PDF)
Deciphering the activity of chemical molecules against a pathogenic organism is an essential task in the drug discovery process. Virtual screening, in which a few plausible molecules are selected from a large set for further processing using computational methods, has become an integral part of this process and complements expensive and time-consuming in vivo and in vitro experiments. To this end, it is essential to extract features from molecules which, on the one hand, are relevant to the biological activity under consideration and, on the other, are suitable for designing fast and robust algorithms. These features/representations are derived in numerical form either from physicochemical properties or from molecular structures and are known as descriptors. In this work we develop two new molecular-fragment descriptors based on a critical analysis of existing descriptors. This development is guided primarily by the notion of coding degeneracy and by the ordering the descriptor induces on the fragments. The first descriptor is derived from the simple graph representation of the molecule and attempts to encode topological features, i.e. the connectivity pattern, in a hierarchical way without discriminating atom or bond types. The second descriptor extends the first by weighting the atoms (vertices) according to bonding pattern, valence state and atom type. Further, the usefulness of these indices is tested by ranking and classifying molecules in two previously studied, large, heterogeneous datasets with regard to their anti-tubercular and other antibacterial activity. This is achieved by developing a scoring function based on clustering with the new descriptors. Clusters are obtained by ordering the descriptors of the training-set molecules and identifying regions which come (almost) exclusively from active or inactive molecules. To test the activity of a new molecule, the weighted overlap of its descriptors with those clusters (interpolation) is computed. Our results are superior to those of previous studies: we obtain better classification performance using only structural information, whereas previous studies used both structural features and some physicochemical parameters. This makes our model simpler, more interpretable and less vulnerable to statistical problems such as chance correlation and overfitting. With a focus on predictive modeling, we have carried out rigorous statistical validation. The new descriptors utilize primarily topological information in a hierarchical way. This can have significant implications for the design of new bioactive molecules (inverse QSAR, combinatorial library design), which is plagued by combinatorial explosion due to the use of a large number of descriptors. While the combinatorial generation of molecules with desirable properties remains a problem to be satisfactorily solved, our model has the potential to reduce the number of degrees of freedom and thereby the complexity.
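The clustering-and-interpolation scoring described above can be loosely sketched as follows. This is an illustrative simplification under assumptions not taken from the thesis: descriptors are treated as one-dimensional values, "clusters" are contiguous intervals dominated by active or inactive training molecules, and a test molecule is scored by how many of its fragment descriptors fall into active-dominated intervals.

```python
from bisect import bisect_right

def build_intervals(train_values, train_labels, n_bins=20, purity=0.9):
    """Partition the descriptor range into bins and keep the bins that are
    (almost) exclusively populated by active (1) or inactive (0) molecules."""
    lo, hi = min(train_values), max(train_values)
    width = (hi - lo) / n_bins or 1.0
    edges = [lo + i * width for i in range(n_bins + 1)]
    counts = [[0, 0] for _ in range(n_bins)]
    for v, y in zip(train_values, train_labels):
        b = min(int((v - lo) / width), n_bins - 1)
        counts[b][y] += 1
    intervals = {}
    for b, (neg, pos) in enumerate(counts):
        total = neg + pos
        if total and max(neg, pos) / total >= purity:
            intervals[b] = 1 if pos > neg else 0   # bin dominated by actives or inactives
    return edges, intervals

def score_molecule(fragment_values, edges, intervals):
    """Weighted overlap of a molecule's fragment descriptors with the labelled bins."""
    score = 0
    for v in fragment_values:
        b = min(max(bisect_right(edges, v) - 1, 0), len(edges) - 2)
        if b in intervals:
            score += 1 if intervals[b] == 1 else -1
    return score

# Toy usage: descriptor values of training fragments with activity labels.
train_vals = [0.1, 0.15, 0.2, 0.8, 0.85, 0.9]
train_lbls = [0, 0, 0, 1, 1, 1]
edges, intervals = build_intervals(train_vals, train_lbls, n_bins=4, purity=0.8)
print(score_molecule([0.82, 0.88, 0.12], edges, intervals))   # positive => predicted active
```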
39

Adaptive Losses for Camera Pose Supervision

Dahlqvist, Marcus January 2021 (has links)
This master's thesis studies the learning of dense feature descriptors where camera poses are the only supervisory signal. The use of camera poses as a supervisory signal has been published only once before, and this thesis expands on that work by utilizing a couple of techniques meant to increase the robustness of the method, which is particularly important when ground-truth correspondences are not available. Firstly, an adaptive robust loss is utilized to better differentiate inliers from outliers. Secondly, statistical properties during training are both enforced and adapted to, in an attempt to alleviate problems with the uncertainties introduced by not having true correspondences available. These additions are shown to slightly increase performance, and they also highlight some key ideas related to prediction certainty and robustness when working with camera poses as a supervisory signal. Finally, possible directions for future work are discussed.
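For context on the adaptive robust loss mentioned above, the sketch below implements the general robust loss of Barron (CVPR 2019), a common choice for this kind of adaptive inlier/outlier weighting; whether the thesis uses exactly this formulation is an assumption on my part.

```python
import torch

def general_robust_loss(x, alpha, c):
    """Barron's general robust loss rho(x, alpha, c).

    x     : residuals (tensor)
    alpha : shape parameter (2 -> scaled L2, 0 -> Cauchy/log, -2 -> Geman-McClure)
    c     : scale parameter (> 0)
    """
    sq = (x / c) ** 2
    if alpha == 2.0:
        return 0.5 * sq
    if alpha == 0.0:
        return torch.log1p(0.5 * sq)
    b = abs(alpha - 2.0)
    return (b / alpha) * ((sq / b + 1.0) ** (alpha / 2.0) - 1.0)

# Example: large residuals (outliers) are progressively down-weighted as alpha decreases.
residuals = torch.tensor([0.1, 1.0, 10.0])
for a in (2.0, 0.0, -2.0):
    print(a, general_robust_loss(residuals, alpha=a, c=1.0))
```

In the adaptive variant, alpha (and possibly c) is treated as a learnable parameter so that training itself decides how aggressively to suppress outlier correspondences.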
40

Graphicacy within the secondary school curriculum : an exploration of continuity and progression of graphicacy in children aged 11 to 15

Danos, Xenia January 2012 (has links)
Graphicacy is the fundamental human capability of communicating through still images. It has been described as the fourth ace within education, alongside literacy, numeracy and articulacy, yet it has been neglected both in education and in research. This thesis investigates graphicacy and students' learning, structured around three objectives: establishing what graphicacy is and how it is used in the school curriculum; demonstrating the wider significance of design and technology teaching and learning by collecting evidence of the importance of graphicacy across the curriculum; and establishing how the abilities to understand and create images affect students' learning. A literature review was conducted in three areas. Firstly, the meaning of graphicacy, the elements it contains and relevant prior studies, including its use in different subject areas and image use within teaching, were identified; this formed the foundation for a new taxonomy of graphicacy. Secondly, the levels of drawing and the developmental stages children go through were investigated, and the need for further research on children's abilities at ages 11 to 14 was identified; the well-balanced arguments of the nature-versus-nurture debate are described. Thirdly, the methodologies used to measure graphicacy and to map the results onto levels of different competencies were reviewed. A naturalistic and often opportunistic approach was followed in this research. The research methodology was based on the analysis of textbooks and, later, on research within practice. The research included the development, validation and use of the taxonomy of graphicacy; case studies in Cyprus, the USA and England identifying graphicacy use across the curriculum; and the creation of continuity and progression descriptors through the analysis of students' work, covering rendering, perspective drawing, logo design, portrait drawing and star-profile charts. Research methodologies developed and implemented for conducting co-research and the Delphi studies are also described. Through interviews with experts, the taxonomy was validated as an appropriate research tool for identifying graphicacy use across the curriculum. These studies identified links between design and technology and all other subject areas studied. Similar patterns of graphicacy use were identified across three schools, one each in Cyprus, the USA and the UK. Photographs were the most commonly used graphicacy element across all subject areas studied. Design and technology in England was found to use the widest variety of graphicacy elements, providing evidence towards research objective 3: establishing how the ability to understand and create images affects students' learning. Continuity and progression (CaP) descriptors were created for each area covered by this research; their success relied on the technical complexity involved in creating each image. Some evidence was found concerning the limits of natural development and how nurture can further develop graphicacy skills. In addition, co-research as a methodology, its limitations and its potential are identified.
