211

A domain independent adaptive imaging system for visual inspection

Panayiotou, Stephen January 1995 (has links)
Computer vision is a rapidly growing area, and the range of applications is increasing very quickly: robotics, inspection, medicine, physics and document processing are all computer vision applications still in their infancy. All these applications are written with a specific task in mind and do not perform well unless they operate in a controlled environment. They do not deploy any knowledge to produce a meaningful description of the scene, or indeed to aid in the analysis of the image. The construction of a symbolic description of a scene from a digitised image is a difficult problem. A symbolic interpretation of an image can be viewed as a mapping from the image pixels to an identification of the semantically relevant objects. Before symbolic reasoning can take place, image processing and segmentation routines must produce the relevant information, and this part of the imaging system inherently introduces many errors. The aim of this project is to reduce the error rate produced by such algorithms and make them adaptable to change in the manufacturing process. Thus a priori knowledge is needed about the image and the objects it contains, as well as knowledge about how the image was acquired from the scene (image geometry, quality, object decomposition, lighting conditions, etc.). Knowledge about the algorithms must also be acquired; such knowledge is collected by studying the algorithms and deciding in which areas of image analysis they work well. In most existing image analysis systems, knowledge of this kind is implicitly embedded in the algorithms employed by the system. Such an approach assumes that all these parameters are invariant. However, in complex applications this may not be the case, so adjustments must be made from time to time to ensure satisfactory performance of the system. A system that allows such adjustments to be made must include an explicit representation of the knowledge utilised in the image analysis procedure. In addition to the use of a priori knowledge, rules are employed to improve the performance of the image processing and segmentation algorithms; these rules considerably enhance the correctness of the segmentation process. The most frequently given goal, if not the only one, in industrial image analysis is to detect and locate objects of a given type in the image. That is, an image may contain objects of different types, and the goal is to identify parts of the image. The system developed here is driven by these goals: by teaching the system a new object, or a fault in an object, the system may adapt its algorithms to detect these new objects as well as compensate for changes in the environment, such as a change in lighting conditions. We have called this system the Visual Planner because we use techniques based on planning to achieve a given goal. As the Visual Planner learns the specific domain it is working in, appropriate algorithms are selected to segment the object. This makes the system domain independent, because different algorithms may be selected for different applications and objects under different environmental conditions.
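
As a rough illustration of the kind of goal-driven selection described above, the sketch below matches a segmentation goal and the current imaging conditions against a small knowledge base of algorithms. All names, rule fields and scores are hypothetical and not taken from the thesis.

```python
# Hypothetical sketch of goal-driven algorithm selection in the spirit of the
# Visual Planner described above; entries and fields are illustrative only.

def select_segmentation(goal, conditions, knowledge_base):
    """Pick a segmentation routine whose known strengths match the goal
    and the current imaging conditions."""
    candidates = []
    for algo in knowledge_base:
        # Each knowledge-base entry records where an algorithm works well.
        if goal in algo["suitable_goals"] and conditions["lighting"] in algo["lighting"]:
            candidates.append(algo)
    # Prefer the algorithm with the best recorded past performance.
    return max(candidates, key=lambda a: a["past_success_rate"], default=None)

knowledge_base = [
    {"name": "adaptive_threshold", "suitable_goals": {"detect_fault"},
     "lighting": {"uneven", "dim"}, "past_success_rate": 0.82},
    {"name": "edge_based", "suitable_goals": {"locate_object", "detect_fault"},
     "lighting": {"even"}, "past_success_rate": 0.91},
]

chosen = select_segmentation("detect_fault", {"lighting": "uneven"}, knowledge_base)
print(chosen["name"] if chosen else "no suitable algorithm")
```

Teaching the system a new object or fault would then amount to adding entries (and updating success rates) rather than rewriting the algorithms themselves.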
212

Efficient streaming for high fidelity imaging

McNamee, Joshua January 2017 (has links)
Researchers and practitioners of graphics, visualisation and imaging have an ever-expanding list of technologies to account for, including (but not limited to) HDR, VR, 4K, 360°, light field and wide colour gamut. As these technologies move from theory to practice, the methods of encoding and transmitting this information need to become more advanced and capable year on year, placing greater demands on latency, bandwidth, and encoding performance. High dynamic range (HDR) video is still in its infancy; the tools for capture, transmission and display of true HDR content are still restricted to professional technicians. Meanwhile, computer graphics are nowadays near-ubiquitous, but to achieve the highest fidelity in real or even reasonable time a user must be located at or near a supercomputer or other specialist workstation. These physical requirements mean that it is not always possible to demonstrate these graphics in any given place at any time, and when the graphics in question are intended to provide a virtual reality experience, the constraints on performance and latency are even tighter. This thesis presents an overall framework for adapting upcoming imaging technologies for efficient streaming, constituting novel work across three areas of imaging technology. Over the course of the thesis, high dynamic range capture, transmission and display is considered, before specifically focusing on the transmission and display of high fidelity rendered graphics, including HDR graphics. Finally, this thesis considers the technical challenges posed by upcoming head-mounted displays (HMDs). In addition, a full literature review is presented across all three of these areas, detailing state-of-the-art methods for approaching all three problem sets. In the area of high dynamic range capture, transmission and display, a framework is presented and evaluated for efficient processing, streaming and encoding of high dynamic range video using general-purpose graphics processing unit (GPGPU) technologies. For remote rendering, state-of-the-art methods of augmenting a streamed graphical render are adapted to incorporate HDR video and high fidelity graphics rendering, specifically with regard to path tracing. Finally, a novel method is proposed for streaming graphics to an HMD for virtual reality (VR). This method utilises 360° projections to transmit and reproject stereo imagery to an HMD with minimal latency, with an adaptation for the rapid local production of depth maps.
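
For a flavour of the kind of processing an HDR streaming pipeline involves, the following sketch packs linear HDR luminance into 10-bit integer codes with a log transfer curve before conventional video encoding. It is an assumption-laden illustration, not the thesis framework; the curve shape, peak level and bit depth are arbitrary choices for the example.

```python
# Minimal sketch: quantise HDR luminance for transport, then recover it.
import numpy as np

def encode_hdr_frame(luminance, peak_nits=4000.0, bits=10):
    """Map linear luminance (cd/m^2) to integer codes via a log curve."""
    luminance = np.clip(luminance, 1e-4, peak_nits)
    normalised = np.log(luminance / 1e-4) / np.log(peak_nits / 1e-4)  # 0..1
    return np.round(normalised * (2**bits - 1)).astype(np.uint16)

def decode_hdr_frame(codes, peak_nits=4000.0, bits=10):
    """Invert the log curve back to linear luminance."""
    normalised = codes.astype(np.float64) / (2**bits - 1)
    return 1e-4 * (peak_nits / 1e-4) ** normalised

frame = np.random.uniform(0.01, 4000.0, size=(4, 4))   # synthetic HDR luminance
codes = encode_hdr_frame(frame)
assert np.allclose(decode_hdr_frame(codes), frame, rtol=0.02)
```

In practice the real work lies in doing this (plus colour handling and compression) on the GPU at video rates, which is where the GPGPU framework described above comes in.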
213

Facial creation : using compositing to conceal identity

Shrimpton, S. L. January 2018 (has links)
This study focused on the creation of new faces by compositing features from donor face photographs, providing a way to generate new face identities. However, does the act of compositing conceal the identity of the donor faces? Two applications of these created faces require donor face identities to remain concealed. Covert social media profiles provide a way for investigating authorities to monitor online criminal activity and, as such, a false online identity, including a face image, is required; compositing features/face parts from various donor face photographs could be used to generate new face identities. Donor face photographs are also used for the ‘texturing’ of facial depictions to reconstruct an image of how a person might appear. This study investigated whether compositing unknown face features onto known familiar faces (celebrities and lecturers) was sufficient to conceal identity in a face recognition task paradigm. A first experiment manipulated individual features to establish a feature saliency hierarchy. The results of this informed the order of feature replacement for the second experiment, where features were replaced in a compound manner to establish how much of a face needs to be replaced to conceal identity. In line with previous literature, the eyes and hair were found to be highly salient, with the eyebrows and nose the least. As expected, the more features that were replaced, the less likely the face was to be recognised. A theoretical criterion point from old to new identity was found for the combined data (celebrity and lecturer), where replacing at least two features resulted in a significant decrease in recognition. Which feature was being replaced had more of an effect during the middle part of feature replacement, around the criterion point, where it was more important to replace the eyes than the mouth. Celebrities represented a higher level of familiarity and may therefore provide a more stringent set of results for practical use, but with less power than the combined data to detect changes. These results suggest that at least three features (half the face) need to be replaced before recognition significantly decreases, especially if this includes the more salient features in the upper half of the face. However, even once all six features were replaced, identity was not concealed 100% of the time, signifying that feature replacement alone was not sufficient to conceal identity. It is quite possible that residual configural and contrast information was facilitating recognition, and therefore it is likely that manipulations such as these are also needed in order to conceal identity.
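
In computational terms, the feature replacement used to build such stimuli can be pictured as copying a donor region into a base face. The minimal sketch below is illustrative only: real stimuli would require careful alignment and blending, and the region coordinates here are invented.

```python
# Illustrative sketch of single-feature replacement on greyscale face images.
import numpy as np

def replace_feature(base_face, donor_face, region):
    """Copy the donor's pixels into the base face over a (y0, y1, x0, x1) box."""
    y0, y1, x0, x1 = region
    composite = base_face.copy()
    composite[y0:y1, x0:x1] = donor_face[y0:y1, x0:x1]
    return composite

# Hypothetical 128x128 faces and an "eyes" region.
base = np.zeros((128, 128), dtype=np.uint8)
donor = np.full((128, 128), 200, dtype=np.uint8)
eyes_region = (40, 60, 25, 103)
stimulus = replace_feature(base, donor, eyes_region)
```

Compound replacement, as in the second experiment, would simply apply this step repeatedly in the saliency-derived order (e.g. eyes, then hair, and so on).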
214

Being sound : FLOSS, flow and event in the composition and ensemble performance of free open computer music

Brooks, Julian January 2016 (has links)
This commentary describes my recent approach to writing compositions for the ensemble performance of computer music. Drawing on experimental music and improvisation, I contend that such music is best considered in terms of people’s situated and relational interplay. The compositional and performative question that permeates this thesis is ‘what can we do, in this time and space, with these tools available to us?’. As themes of equality and egalitarian access underpin this work throughout, I highlight my engagement with Free Libre Open Source Software (FLOSS) ideology and community, reflecting on how this achieves my aims. I describe my writing of text score compositions, making use of the term bounded improvisation, whose purposeful requirements for indeterminate realisation extend most current computer-based performance practice. Though no single strand of this research is perhaps unusual by itself, such an assemblage as that outlined above (incorporating composition, computer coding and ensemble performance practice) is, when allied to an understanding of electronic and computer music praxis, currently an underdeveloped approach. I have therefore chosen to term such an approach free open computer music. I incorporate two further pre-existing conceptual formulations to present a framework for constructing, reflecting on, and developing my work in this field. Firstly, flow, or 'immersed experience', is useful to explicate difficult-to-capture aspects of instrumental engagement and ensemble performance. Secondly, this portfolio of scores aims to produce well-constructed situations, facilitating spaces of flow which contain within their environments the opportunity for an event to take place. I present the outcomes of my practice as place-forming tactics that catalyse something to do, but not what to do, in performative spaces such as those described above. Such intentions define my aims for composition. These theoretical concerns, together with an allied consideration of the underpinning themes highlighted above, provide a useful framework for reflection on and evaluation of this work.
215

Developments to graphical modelling methods and their applications in manufacturing systems analysis and design

Colquhoun, Gary John January 1996 (has links)
No description available.
216

Computer aided process parameter selection for high speed machining

Dagiloke, I. F. January 1995 (has links)
No description available.
217

Integrating information systems technology competencies into accounting : a comparative study

Ahmed, Adel El-Said January 1999 (has links)
No description available.
218

Modeling and trading the Greek stock market with artificial intelligence models

Karathanasopoulos, Andreas January 2011 (has links)
The main motivation for this thesis is to introduce some new methodologies for the prediction of the directional movement of financial assets, with an application to the ASE20 Greek stock index. Specifically, we use several alternative computational methodologies, namely an Evolutionary Support Vector Machine (ESVM), Gene Expression Programming, Genetic Programming algorithms and two hybrid combinations of linear and non-linear models, for modeling and trading the ASE20 Greek stock index, using as inputs previous values of the ASE20 index and of four other financial indices. For comparison purposes, the trading performance of the ESVM stock predictor, Gene Expression Programming, Genetic Programming algorithms and the two hybrid combination methodologies has been benchmarked against four traditional strategies (a naïve strategy, a buy-and-hold strategy, a MACD model and an ARMA model) and a Multilayer Perceptron (MLP) neural network model. As it turns out, the proposed methodologies produced a higher trading performance in terms of annualized return and information ratio, while providing information about the relationship between the ASE20 index and other foreign indices.
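
To make the evaluation criteria concrete, the sketch below shows how annualized return and information ratio can be computed for a directional trading strategy on synthetic data. It illustrates the general procedure only, not the exact backtest or data used in the thesis.

```python
# Hedged sketch: score a directional forecast by simulated trading statistics.
import numpy as np

def trading_stats(index_returns, predicted_direction, periods_per_year=252):
    """Strategy return = sign of the prediction times the realised index return."""
    strategy_returns = np.sign(predicted_direction) * index_returns
    annualized_return = periods_per_year * strategy_returns.mean()
    annualized_vol = np.sqrt(periods_per_year) * strategy_returns.std(ddof=1)
    information_ratio = annualized_return / annualized_vol
    return annualized_return, information_ratio

rng = np.random.default_rng(0)
daily_returns = rng.normal(0.0002, 0.015, 500)                    # synthetic index returns
predictions = np.sign(daily_returns + rng.normal(0, 0.02, 500))   # imperfect forecasts
print(trading_stats(daily_returns, predictions))
```

Each candidate model (ESVM, GP, MLP, ARMA and so on) would be fed through the same accounting so that their annualized returns and information ratios are directly comparable.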
219

On the evaluation of aggregated web search

Zhou, Ke January 2014 (has links)
Aggregating search results from a variety of heterogeneous sources, or so-called verticals, such as news, image and video, into a single interface is a popular paradigm in web search. This search paradigm is commonly referred to as aggregated search. The heterogeneity of the information, the richer user interaction and the more complex presentation strategy make the evaluation of the aggregated search paradigm quite challenging. The Cranfield paradigm, the use of test collections and evaluation measures to assess the effectiveness of information retrieval (IR) systems, is the de facto standard evaluation strategy in the IR research community and has its origins in work dating to the early 1960s. This thesis focuses on applying this evaluation paradigm to the context of aggregated web search, contributing to the long-term goal of a complete, reproducible and reliable evaluation methodology for aggregated search in the research community. The Cranfield paradigm for aggregated search consists of building a test collection and developing a set of evaluation metrics. In the context of aggregated search, a test collection should contain results from a set of verticals, information needs relating to this task and a set of relevance assessments. The metrics proposed should utilize the information in the test collection in order to measure the performance of any aggregated search page. The more complex user behavior of aggregated search should be reflected in the test collection through assessments and modeled in the metrics. Therefore, firstly, we aim to better understand the factors involved in determining relevance for aggregated search and subsequently build a reliable and reusable test collection for this task. By conducting several user studies to assess vertical relevance and creating a test collection by reusing existing test collections, we create a testbed with both vertical-level (user orientation) and document-level relevance assessments. In addition, we analyze the relationship between both types of assessments and find that they are correlated in terms of measuring the system performance for the user. Secondly, by utilizing the created test collection, we aim to investigate how to model the aggregated search user in a principled way in order to propose reliable, intuitive and trustworthy evaluation metrics to measure the user experience. We start our investigations by evaluating solely one key component of aggregated search, vertical selection, i.e. selecting the relevant verticals, before proposing a general utility-effort framework to evaluate the final aggregated search pages. We demonstrate the fidelity (predictive power) of the proposed metrics by correlating them with user preferences for aggregated search pages. Furthermore, we meta-evaluate the reliability and intuitiveness of a variety of metrics and show that our proposed aggregated search metrics are the most reliable and intuitive, compared to adapted diversity-based and traditional IR metrics. To summarize, in this thesis we mainly demonstrate the feasibility of applying the Cranfield paradigm to aggregated search for reproducible, cheap, reliable and trustworthy evaluation.
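
As a deliberately simplified illustration of a utility/effort style of measurement (not the thesis's actual metric), the sketch below scores an aggregated page by rank-discounted relevance divided by the browsing effort spent.

```python
# Simplified utility/effort scoring of an aggregated search page.

def utility_effort_score(page, browsing_depth=10):
    """page: list of (relevance_grade, effort_cost) tuples in display order."""
    utility, effort = 0.0, 0.0
    for rank, (relevance, cost) in enumerate(page[:browsing_depth], start=1):
        utility += relevance / rank   # later items contribute less utility
        effort += cost                # e.g. examining a video result costs more
    return utility / effort if effort else 0.0

# Hypothetical page: (relevance grade 0-3, effort in arbitrary units)
page = [(3, 1.0), (2, 2.5), (0, 1.0), (1, 1.0)]
print(round(utility_effort_score(page), 3))
```

A metric of this shape rewards placing relevant vertical results early while penalising pages that demand disproportionate examination effort, which is the intuition the framework above formalises and then validates against user preferences.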
220

A knowledge-based intelligent system for surface texture (virtual surf)

Wang, Yan January 2008 (has links)
The presented thesis documents the investigation and development of the mathematical foundations for a novel knowledge-based system for surface texture (the VirtualSurf system). This is the first time that this type of novel knowledge-based system has been applied to surface texture knowledge. It is important to realize that surface texture knowledge, based on the new-generation Geometrical Product Specification (GPS) system, is considered to be too theoretical, abstract, complex and over-elaborate, and it is not easy for industry to understand and implement it efficiently in a short time. VirtualSurf has been developed to link surface function and specification through manufacture and verification, and to provide a universal platform for engineers in industry, making it easier for them to understand and use the latest surface texture knowledge. The intelligent knowledge base should be capable of incorporating knowledge from multiple sources (standards, books, experts, etc.), adding new knowledge from these sources and still remaining a coherent, reliable system. In this research, an object-relationship data model is developed to represent surface texture knowledge. The object-relationship data model generalises the relational and object-oriented data models: it has both flexible structures for entities and good mathematical foundations, based on category theory, which ensure that the knowledge base remains a coherent and reliable system as new knowledge is added. This prototype system leaves much potential for further work. Based on the framework and data models developed in this thesis, the system can be developed into deployable software, either acting as a training tool for new and less experienced engineers or connecting with other analysis software, CAD software (design), surface instrument software (measurement), etc., and finally being applied in production industries.
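
A rough sketch of how an object-relationship model of surface texture knowledge might look in code is given below; the entities, attributes and relationship names are illustrative assumptions rather than the VirtualSurf schema.

```python
# Hedged sketch of an object-relationship representation of domain knowledge.
from dataclasses import dataclass, field

@dataclass
class Entity:
    name: str
    attributes: dict = field(default_factory=dict)

@dataclass
class Relationship:
    name: str
    source: Entity
    target: Entity

# Example entities drawn from the surface texture domain.
ra_parameter = Entity("Ra", {"definition": "arithmetic mean deviation", "unit": "µm"})
turning = Entity("Turning", {"category": "machining process"})
iso_4287 = Entity("ISO 4287", {"type": "standard"})

knowledge = [
    Relationship("defined_in", ra_parameter, iso_4287),
    Relationship("typically_produced_by", ra_parameter, turning),
]

for rel in knowledge:
    print(f"{rel.source.name} --{rel.name}--> {rel.target.name}")
```

The appeal of this style of model is that new entities and relationships can be added incrementally while the overall structure, as the abstract notes, stays mathematically well founded.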
