201
A domain independent adaptive imaging system for visual inspection
Panayiotou, Stephen, January 1995
Computer vision is a rapidly growing area, and the range of applications is increasing very quickly: robotics, inspection, medicine, physics and document processing are all computer vision applications still in their infancy. These applications are written with a specific task in mind and do not perform well unless they operate under a controlled environment. They do not deploy any knowledge to produce a meaningful description of the scene, or indeed to aid the analysis of the image. The construction of a symbolic description of a scene from a digitised image is a difficult problem. A symbolic interpretation of an image can be viewed as a mapping from the image pixels to an identification of the semantically relevant objects. Before symbolic reasoning can take place, image processing and segmentation routines must produce the relevant information, and this part of the imaging system inherently introduces many errors. The aim of this project is to reduce the error rate produced by such algorithms and to make them adaptable to change in the manufacturing process. Thus a priori knowledge is needed about the image and the objects it contains, as well as knowledge about how the image was acquired from the scene (image geometry, quality, object decomposition, lighting conditions, etc.). Knowledge about the algorithms must also be acquired; such knowledge is collected by studying the algorithms and deciding in which areas of image analysis they work well. In most existing image analysis systems, knowledge of this kind is implicitly embedded in the algorithms employed by the system. Such an approach assumes that all these parameters are invariant. However, in complex applications this may not be the case, so adjustments must be made from time to time to ensure satisfactory performance of the system. A system that allows such adjustments to be made must comprise an explicit representation of the knowledge utilised in the image analysis procedure. In addition to the use of a priori knowledge, rules are employed to improve the performance of the image processing and segmentation algorithms; these rules considerably enhance the correctness of the segmentation process. The most frequently given goal in industrial image analysis, if not the only one, is to detect and locate objects of a given type in the image. That is, an image may contain objects of different types, and the goal is to identify parts of the image. The system developed here is driven by these goals: by teaching the system a new object, or a fault in an object, the system may adapt the algorithms to detect these new objects as well as compensate for changes in the environment, such as a change in lighting conditions. We have called this system the Visual Planner because it uses techniques based on planning to achieve a given goal. As the Visual Planner learns the specific domain it is working in, appropriate algorithms are selected to segment the object. This makes the system domain independent, because different algorithms may be selected for different applications and objects under different environmental conditions.
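A minimal Python sketch of the planning idea described above: segmentation steps are selected from an explicit store of algorithm knowledge recording when each works well, rather than being hard-coded. The algorithm names, preconditions and goal labels are hypothetical placeholders, not the thesis's actual knowledge base.

```python
# Illustrative sketch (not the thesis code): selecting segmentation steps
# from explicit algorithm knowledge, in the spirit of a "visual planner".
ALGORITHM_KNOWLEDGE = [
    # (name, works_well_when, produces)
    ("global_threshold",   {"lighting": "uniform"}, "binary_mask"),
    ("adaptive_threshold", {"lighting": "uneven"},  "binary_mask"),
    ("edge_linking",       {"contrast": "high"},    "object_contours"),
    ("region_growing",     {"contrast": "low"},     "object_contours"),
]

def plan(goal, scene_conditions):
    """Return the algorithms whose stated preconditions match the current
    scene conditions and whose output serves the given goal."""
    steps = []
    for name, preconditions, produces in ALGORITHM_KNOWLEDGE:
        if produces == goal and all(
            scene_conditions.get(k) == v for k, v in preconditions.items()
        ):
            steps.append(name)
    return steps

# When the scene conditions change (e.g. the lighting), a different
# algorithm is selected for the same goal.
print(plan("binary_mask", {"lighting": "uniform"}))  # ['global_threshold']
print(plan("binary_mask", {"lighting": "uneven"}))   # ['adaptive_threshold']
```

Because the knowledge is held explicitly rather than buried in the algorithms, adapting to a new object or a new environment amounts to adding or revising entries in the knowledge store.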
202
Facial creation : using compositing to conceal identity
Shrimpton, S. L., January 2018
This study focused on the creation of new faces by compositing together features from donor face photographs to generate new face identities. However, does the act of compositing conceal the identity of the donor faces? Two applications of these created faces require the donor face identities to remain concealed. Covert social media profiles provide a way for investigating authorities to monitor online criminal activity and, as such, require a false online identity, including a face image; compositing features or face parts from various donor face photographs could be used to generate such new face identities. Donor face photographs are also used for the ‘texturing’ of facial depictions to reconstruct an image of how a person might appear. This study investigated whether compositing unknown face features onto known familiar faces (celebrities and lecturers) was sufficient to conceal identity in a face recognition task paradigm. A first experiment manipulated individual features to establish a feature saliency hierarchy. The results informed the order of feature replacement for the second experiment, where features were replaced in a compound manner to establish how much of a face needs to be replaced to conceal identity. In line with previous literature, the eyes and hair were found to be highly salient, and the eyebrows and nose the least. As expected, the more features that were replaced, the less likely the face was to be recognised. A theoretical criterion point from old to new identity was found for the combined data (celebrity and lecturer), where replacing at least two features resulted in a significant decrease in recognition. Which feature was being replaced had more of an effect during the middle part of feature replacement, around the criterion point, where replacing the eyes was more important than replacing the mouth. Celebrities represented a higher level of familiarity and therefore may provide a more stringent set of results for practical use, albeit with less power than the combined data to detect changes; these results suggest that at least three features (half the face) need to be replaced before recognition significantly decreases, especially if this includes the more salient features in the upper half of the face. However, even once all six features were replaced, identity was not concealed 100% of the time, signifying that feature replacement alone was not sufficient to conceal identity. It is entirely possible that residual configural and contrast information was facilitating recognition, and therefore it is likely that manipulations of such information are also needed in order to conceal identity.
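A minimal Python sketch, not part of the study's materials, of compound feature replacement in saliency order using crude rectangular regions. The first two entries of the saliency order reflect the hierarchy reported above (eyes and hair most salient, eyebrows and nose least); the middle of the order, the region coordinates and the image sizes are invented for illustration.

```python
# Illustrative sketch: replace the n most salient features of a base face
# with the corresponding regions from different donor faces.
import numpy as np

SALIENCY_ORDER = ["eyes", "hair", "mouth", "face_shape", "nose", "eyebrows"]
FEATURE_REGIONS = {            # (row_start, row_end, col_start, col_end)
    "hair":       (0, 40, 0, 128),
    "eyebrows":   (40, 50, 20, 108),
    "eyes":       (50, 65, 20, 108),
    "nose":       (65, 90, 50, 78),
    "mouth":      (90, 105, 40, 88),
    "face_shape": (105, 128, 0, 128),
}

def composite(base_face, donor_faces, n_features):
    """Copy each selected feature region from a different donor face."""
    out = base_face.copy()
    for feature, donor in zip(SALIENCY_ORDER[:n_features], donor_faces):
        r0, r1, c0, c1 = FEATURE_REGIONS[feature]
        out[r0:r1, c0:c1] = donor[r0:r1, c0:c1]
    return out

base = np.zeros((128, 128), dtype=np.uint8)
donors = [np.full((128, 128), 40 * (i + 1), dtype=np.uint8) for i in range(6)]
print(composite(base, donors, 2).mean())  # two features replaced
```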
203
Being sound : FLOSS, flow and event in the composition and ensemble performance of free open computer music
Brooks, Julian, January 2016
This commentary describes my recent approach to writing compositions for the ensemble performance of computer music. Drawing on experimental music and improvisation, I contend that such music is best considered in terms of people’s situated and relational interplay. The compositional and performative question that permeates this thesis is ‘what can we do, in this time and space, with these tools available to us?’. As themes of equality and egalitarian access underpin this work throughout, I highlight my engagement with Free Libre Open Source Software (FLOSS) ideology and community, reflecting on how this achieves my aims. I describe my writing of text score compositions, making use of the term bounded improvisation, whose purposeful requirements for indeterminate realisation extend most current computer-based performance practice. Though perhaps no single strand of this research is unusual by itself, such an assemblage as that outlined above (incorporating composition, computer coding and ensemble performance practice) is, when allied to an understanding of electronic and computer music praxis, currently an underdeveloped approach. I have thus chosen to term such an approach free open computer music. I incorporate two further pre-existing conceptual formulations to present a framework for constructing, reflecting on, and developing my work in this field. Firstly, flow, or 'immersed experience', is useful for explicating aspects of instrumental engagement and ensemble performance that are difficult to capture. Secondly, this portfolio of scores aims to produce well-constructed situations, facilitating spaces of flow which contain within their environments the opportunity for an event to take place. I present the outcomes of my practice as place-forming tactics that catalyse something to do, but not what to do, in performative spaces such as those described above. Such intentions define my aims for composition. These theoretical concerns, together with an allied consideration of the underpinning themes highlighted above, provide a useful framework for reflection on and evaluation of this work.
204
Developments to graphical modelling methods and their applications in manufacturing systems analysis and design
Colquhoun, Gary John, January 1996
No description available.
205
Computer aided process parameter selection for high speed machining
Dagiloke, I. F., January 1995
No description available.
206
Integrating information systems technology competencies into accounting : a comparative study
Ahmed, Adel El-Said, January 1999
No description available.
207
Modeling and trading the Greek stock market with artificial intelligence models
Karathanasopoulos, Andreas, January 2011
The main motivation for this thesis is to introduce new methodologies for predicting the directional movement of financial assets, with an application to the ASE20 Greek stock index. Specifically, we use alternative computational methodologies, namely an Evolutionary Support Vector Machine (ESVM), Gene Expression Programming, Genetic Programming algorithms and two hybrid combinations of linear and non-linear models, for modeling and trading the ASE20 Greek stock index, using as inputs previous values of the ASE20 index and of four other financial indices. For comparison purposes, the trading performance of the ESVM stock predictor, Gene Expression Programming, the Genetic Programming algorithms and the two hybrid combination methodologies has been benchmarked against four traditional strategies (a naïve strategy, a buy-and-hold strategy, a MACD model and an ARMA model) and a Multilayer Perceptron (MLP) neural network model. As it turns out, the proposed methodologies produced higher trading performance in terms of annualized return and information ratio, while providing information about the relationship between the ASE20 index and other foreign indices.
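A minimal Python sketch, not taken from the thesis, of the two performance measures used in the benchmarking: annualized return and information ratio computed from daily trading returns. The 252-day trading year, the simulated return series and this particular (excess-return) form of the information ratio are assumptions; definitions of the ratio vary slightly across the literature.

```python
# Illustrative sketch: performance measures for comparing trading strategies.
import numpy as np

def annualized_return(daily_returns, periods_per_year=252):
    """Mean daily return scaled to an annual figure."""
    return np.mean(daily_returns) * periods_per_year

def information_ratio(daily_returns, benchmark_returns, periods_per_year=252):
    """Annualized mean active return divided by its annualized volatility."""
    active = np.asarray(daily_returns) - np.asarray(benchmark_returns)
    return np.sqrt(periods_per_year) * active.mean() / active.std(ddof=1)

# Hypothetical daily returns for a strategy and a buy-and-hold benchmark.
rng = np.random.default_rng(0)
strategy = rng.normal(0.0005, 0.01, 500)
buy_hold = rng.normal(0.0002, 0.01, 500)
print(round(annualized_return(strategy), 4),
      round(information_ratio(strategy, buy_hold), 4))
```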
208
On the evaluation of aggregated web search
Zhou, Ke, January 2014
Aggregating search results from a variety of heterogeneous sources, or so-called verticals, such as news, image and video into a single interface is a popular paradigm in web search. This search paradigm is commonly referred to as aggregated search. The heterogeneity of the information, the richer user interaction and the more complex presentation strategy make the evaluation of the aggregated search paradigm quite challenging. The Cranfield paradigm, the use of test collections and evaluation measures to assess the effectiveness of information retrieval (IR) systems, is the de facto standard evaluation strategy in the IR research community, and it has its origins in work dating to the early 1960s. This thesis focuses on applying this evaluation paradigm to the context of aggregated web search, contributing to the long-term goal of a complete, reproducible and reliable evaluation methodology for aggregated search in the research community. The Cranfield paradigm for aggregated search consists of building a test collection and developing a set of evaluation metrics. In the context of aggregated search, a test collection should contain results from a set of verticals, some information needs relating to this task and a set of relevance assessments. The metrics proposed should utilize the information in the test collection in order to measure the performance of any aggregated search page. The more complex user behavior of aggregated search should be reflected in the test collection through assessments and modeled in the metrics. Therefore, firstly, we aim to better understand the factors involved in determining relevance for aggregated search and subsequently build a reliable and reusable test collection for this task. By conducting several user studies to assess vertical relevance and creating a test collection by reusing existing test collections, we create a testbed with both vertical-level (user orientation) and document-level relevance assessments. In addition, we analyze the relationship between the two types of assessments and find that they are correlated in terms of measuring system performance for the user. Secondly, by utilizing the created test collection, we aim to investigate how to model the aggregated search user in a principled way in order to propose reliable, intuitive and trustworthy evaluation metrics to measure the user experience. We start our investigations by evaluating solely one key component of aggregated search: vertical selection, i.e. selecting the relevant verticals. We then propose a general utility-effort framework to evaluate the final aggregated search pages. We demonstrate the fidelity (predictive power) of the proposed metrics by correlating them with user preferences over aggregated search pages. Furthermore, we meta-evaluate the reliability and intuitiveness of a variety of metrics and show that our proposed aggregated search metrics are the most reliable and intuitive, compared to adapted diversity-based and traditional IR metrics. To summarize, in this thesis we mainly demonstrate the feasibility of applying the Cranfield paradigm to aggregated search for reproducible, cheap, reliable and trustworthy evaluation.
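Purely as an illustration of the flavour of such metrics, the toy Python score below trades relevance gain against accumulated reading effort down an aggregated page; it is not the utility-effort framework defined in the thesis, and the block gains and efforts are invented numbers.

```python
# Illustrative sketch: a toy utility/effort-style score for an aggregated
# page. Each block is (relevance_gain, reading_effort) in display order;
# later blocks are discounted by rank and by the effort spent so far.
import math

def utility_effort_score(page_blocks):
    score, cumulative_effort = 0.0, 0.0
    for rank, (gain, effort) in enumerate(page_blocks, start=1):
        cumulative_effort += effort
        score += gain / (math.log2(rank + 1) * (1.0 + cumulative_effort))
    return score

# A page that places a relevant news vertical first scores higher than one
# that buries it below two non-relevant web results.
good_page = [(1.0, 0.5), (0.2, 1.0), (0.1, 1.0)]
poor_page = [(0.1, 1.0), (0.2, 1.0), (1.0, 0.5)]
print(utility_effort_score(good_page), utility_effort_score(poor_page))
```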
209
A knowledge-based intelligent system for surface texture (virtual surf)
Wang, Yan, January 2008
The presented thesis documents the investigation and development of the mathematical foundations for a novel knowledge-based system for surface texture (the VirtualSurf system). This is the first time that this type of knowledge-based system has been applied to surface texture knowledge. It is important to realise that surface texture knowledge, based on the new-generation Geometrical Product Specification (GPS) system, is considered to be too theoretical, abstract, complex and over-elaborate, and it is not easy for industry to understand and implement it efficiently in a short time. VirtualSurf has been developed to link surface function and specification through to manufacture and verification, and to provide a universal platform for engineers in industry, making it easier for them to understand and use the latest surface texture knowledge. The intelligent knowledge base should be capable of incorporating knowledge from multiple sources (standards, books, experts, etc.), adding new knowledge from these sources and still remaining a coherent, reliable system. In this research, an object-relationship data model is developed to represent surface texture knowledge. The object-relationship data model generalises the relational and object-oriented data models: it has both flexible structures for entities and good mathematical foundations, based on category theory, that ensure the knowledge base remains a coherent and reliable system as new knowledge is added. This prototype system leaves much potential for further work. Based on the framework and data models developed in this thesis, the system will be developed into deployable software, either acting as a training tool for new and less experienced engineers or connecting further with other analysis software, CAD software (design), surface instrument software (measurement), etc., and finally being applied in production industries.
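A minimal Python sketch of the general idea of an object-relationship representation: entities and named relationships are stored explicitly, so knowledge from new sources can be appended without disturbing existing structures. The entity names and relationship labels are hypothetical and are not taken from the VirtualSurf schema.

```python
# Illustrative sketch: entities and named relationships held explicitly.
from dataclasses import dataclass, field

@dataclass
class Entity:
    name: str
    attributes: dict = field(default_factory=dict)

@dataclass
class Relationship:
    name: str
    source: Entity
    target: Entity

knowledge_base = []
ra = Entity("Ra", {"definition": "arithmetic mean deviation of the profile"})
turning = Entity("Turning", {"category": "manufacturing process"})
knowledge_base.append(Relationship("typically_specified_for", ra, turning))

# New sources (standards, experts, handbooks) contribute by appending
# further entities and relationships rather than altering existing ones.
for rel in knowledge_base:
    print(f"{rel.source.name} --{rel.name}--> {rel.target.name}")
```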
210
Softgauges for surface texture
Li, Tukun, January 2011
Surface texture plays an important role in the specification of a precision workpiece. However, the route of traceability for surface texture measurements is not well developed. One of the main technical obstacles is the lack of tools to check the traceability of the software of surface measuring instruments and to estimate the uncertainty contributed by that software. To this end, the concept of softgauges (i.e. software measurement standards) for surface texture has been introduced into the international standards. The presented thesis documents the realisation of softgauges for surface texture, which is a part of the National Measurement System in the UK. These standards, in the form of reference datasets with reference results, have been developed by both simulation and experimental methods, and an analysis of software uncertainty has been undertaken. The measurement standards have been used to verify both reference software (developed by national measurement institutes) and commercial packages (developed by instrument manufacturers). In addition, an evaluation of measurement uncertainty at workshop level has been carried out. The developed standards provide a novel route to demonstrating the metrological traceability of most surface profile parameters. Currently, these standards are distributed via the internet by the National Physical Laboratory (NPL) in the UK; they are also recognised by NIST in the USA and PTB in Germany, and these organisations would also provide a suitable vehicle for distributing the results of this study.
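A minimal Python sketch, with hypothetical data, of how a softgauge is used: the software under test evaluates a reference profile and its result is compared with the certified reference value within an agreed tolerance. Ra (the arithmetic mean deviation of the profile) is computed here for a sampled sine wave whose analytic Ra is known.

```python
# Illustrative sketch: verifying parameter software against a reference
# dataset and reference result (a real softgauge supplies both, together
# with a statement of uncertainty).
import numpy as np

def ra(profile):
    """Ra: arithmetic mean deviation of the profile from its mean line."""
    z = np.asarray(profile, dtype=float)
    return np.mean(np.abs(z - z.mean()))

x = np.linspace(0.0, 4.0, 4000, endpoint=False)   # 4 mm trace, 1 um spacing
reference_profile = 1.0 * np.sin(2.0 * np.pi * x)  # sine, amplitude 1 um
reference_ra = 2.0 / np.pi                         # analytic Ra of that sine
computed_ra = ra(reference_profile)
print(computed_ra, abs(computed_ra - reference_ra) < 1e-3)  # within tolerance
```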