251

A method for ontology and knowledge-base assisted text mining for diabetes discussion forum

Issa, Ahmad January 2015 (has links)
Social media offers researchers a vast amount of unstructured text as a source for discovering hidden knowledge and insights. However, social media poses new challenges to text mining and knowledge discovery due to its short texts, temporal nature and informal language. In order to identify the main requirements for analysing unstructured text in social media, this research takes as a case study a large discussion forum in the diabetes domain. It then reviews and evaluates existing text mining methods against the requirements for analysing such a domain. Using domain background knowledge to bridge the semantic gap in traditional text mining methods was identified as a key requirement for analysing text in discussion forums. Existing ontology engineering methodologies encounter difficulties in deriving suitable domain knowledge with the appropriate breadth and depth of domain-specific concepts and a rich relationship structure; these limitations usually originate from a reliance on human domain experts. This research developed a novel semantic text mining method that can identify the concepts and topics being discussed and the strength of the relationships between them, and can then display the emergent knowledge from a discussion forum. The derived method has a modular design consisting of three main components: the ontology building process, semantic annotation and topic identification, and visualisation tools. The ontology building process generates a domain ontology quickly with little need for domain experts. The topic identification component utilises a hybrid system of domain ontology and a general knowledge base for text enrichment and annotation, while the visualisation methods of dynamic tag clouds and co-occurrence networks enable a flexible presentation of the results and can help uncover hidden knowledge. Application of the derived text mining method within the case study helped identify trending topics in the forum and how they change over time. The derived method performed better in semantic annotation of the text than the other systems evaluated. The new text mining method appears to be generalisable to domains other than diabetes; future work is needed to confirm this and to evaluate its applicability to other types of social media text sources.
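As a rough illustration of the annotation and co-occurrence analysis the abstract describes, the sketch below matches forum posts against a toy ontology and counts concept co-occurrences. The ontology fragment, concept names and example posts are invented for illustration; the thesis's actual ontology building process and knowledge base are not reproduced here.

```python
# A minimal sketch of ontology-assisted annotation of forum posts,
# assuming a hypothetical ontology fragment (concept -> surface forms).
from collections import Counter
from itertools import combinations

ONTOLOGY = {
    "insulin_therapy": {"insulin", "basal insulin", "bolus"},
    "blood_glucose":   {"blood sugar", "glucose", "bg level"},
    "diet":            {"carbs", "carbohydrate", "low-carb diet"},
}

def annotate(post: str) -> set[str]:
    """Return the ontology concepts mentioned in a forum post."""
    text = post.lower()
    return {concept for concept, terms in ONTOLOGY.items()
            if any(term in text for term in terms)}

def cooccurrence(posts: list[str]) -> Counter:
    """Count concept pairs that co-occur within a single post."""
    pairs = Counter()
    for post in posts:
        for a, b in combinations(sorted(annotate(post)), 2):
            pairs[(a, b)] += 1
    return pairs

posts = ["My blood sugar spikes unless I count carbs",
         "Switched to basal insulin; glucose is steadier now"]
print(cooccurrence(posts))
# Counter({('blood_glucose', 'diet'): 1, ('blood_glucose', 'insulin_therapy'): 1})
```

A co-occurrence network of the kind the thesis visualises would treat these pair counts as weighted edges between concept nodes.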
252

Taming web data: exploiting linked data for integrating medical educational content

Qadan Al Fayez, Reem Ali January 2016 (has links)
Open data are playing a vital role in different communities, including government, business, and education, and this revolution has had a significant impact on the education field. Recently, new practices have been adopted for publishing and connecting data on the web, known as "Linked Data", and these are used to expose and connect data that were not previously linked. In the context of education, applying Linked Data practices to the growing amount of open data used for learning is potentially highly beneficial. The work presented in this thesis tackles the challenges of data acquisition and integration from distributed web data sources into one linked dataset. The application domain of this thesis is medical education, and the focus is on bridging the gap between articles published in online educational libraries and content published on Web 2.0 platforms that can be used for education. Integrating such a collection of heterogeneous resources requires creating links between data collected from distributed web data sources. To address these challenges, a system is proposed that exploits Linked Data practices to build a metadata schema in XML/RDF format for describing resources, enriching it with external datasets that add semantics to the metadata. The proposed system collects resources from distributed data sources on the web and enriches their metadata with concepts from biomedical ontologies, such as SNOMED CT, that enable their linking. The final result of building this system is a linked dataset of more than 10,000 resources collected from the PubMed library, YouTube channels, and blogging platforms. The effectiveness of the proposed system is evaluated by validating the content of the linked dataset when it is accessed and retrieved. Ontology-based techniques have been developed for browsing and querying the linked dataset, and experiments have been conducted to simulate users' access to it and validate its content. The results were promising and showed the effectiveness of using SNOMED CT for integrating distributed resources from diverse web data sources.
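A minimal sketch of this kind of RDF metadata enrichment, using the rdflib library, is given below. The resource URL and the choice of properties are illustrative assumptions rather than the thesis's actual schema; the SNOMED CT identifier 73211009 (diabetes mellitus) is a real concept code used here only as an example.

```python
# A hedged sketch of enriching a Web 2.0 resource's metadata with a
# SNOMED CT concept so it can be linked to library articles sharing
# that concept. Resource URL and property choices are assumptions.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS, RDF

SNOMED = Namespace("http://snomed.info/id/")
SCHEMA = Namespace("http://schema.org/")

g = Graph()
video = URIRef("https://www.youtube.com/watch?v=EXAMPLE")  # hypothetical resource
g.add((video, RDF.type, SCHEMA.VideoObject))
g.add((video, DCTERMS.title, Literal("Insulin administration technique")))
# Enrich with a SNOMED CT concept (73211009 = diabetes mellitus); any
# PubMed article annotated with the same concept becomes linkable to it.
g.add((video, DCTERMS.subject, SNOMED["73211009"]))

print(g.serialize(format="turtle"))
```

Querying the resulting graph by shared `dcterms:subject` values is then enough to retrieve resources from different sources that discuss the same biomedical concept.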
253

Interactive global illumination on the CPU

Dubla, Piotr January 2012 (has links)
Computing realistic physically-based global illumination in real time remains one of the major goals in the fields of rendering and visualisation, one that has not yet been achieved due to its inherent computational complexity. This thesis focuses on CPU-based interactive global illumination approaches with the aim of developing generalisable, hardware-agnostic algorithms. Interactive ray tracing relies on spatial and cache coherency to achieve interactive rates, which conflicts with the needs of global illumination solutions, which require a large number of incoherent secondary rays to be computed. Methods that reduce the total number of rays to be processed, such as selective rendering, were investigated to determine how best they can be utilised. The impact that selective rendering has on interactive ray tracing was analysed and quantified, and two novel global illumination algorithms were developed, with the structured methodology used presented as a framework. Adaptive Interleaved Sampling is a generalisable approach that combines interleaved sampling with an adaptive approach, using efficient component-specific adaptive guidance methods to drive the computation; results of up to 11 frames per second were demonstrated for multiple components, including participating media. Temporal Instant Caching is a caching scheme for accelerating the computation of diffuse interreflections to interactive rates; this approach achieved frame rates exceeding 9 frames per second for the majority of scenes. Validation of the results for both approaches showed little perceptual difference when compared against a gold-standard path-traced image. Further research into caching led to the development of a new wait-free data access control mechanism for sharing the irradiance cache among multiple rendering threads on a shared-memory parallel system. By not serialising accesses to the shared data structure, the irradiance values were shared among all the threads without overhead or contention when reading and writing simultaneously. This new approach achieved efficiencies between 77% and 92% for 8 threads when calculating static images and animations. This work demonstrates that, due to the flexibility of the CPU, CPU-based algorithms remain a valid and competitive choice for achieving global illumination interactively, and an alternative to the generally brute-force GPU-centric algorithms.
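To make the irradiance cache concrete, the sketch below shows a simplified, single-threaded lookup in the style of Ward's classic scheme: cached records are interpolated with weights that penalise distance and normal divergence. The accuracy parameter and threshold form are assumptions for illustration; the thesis's wait-free multi-threaded access control and temporal extensions are not reproduced here.

```python
# A simplified, single-threaded irradiance-cache sketch in the style of
# Ward's scheme; not the thesis's wait-free shared-memory mechanism.
import numpy as np

class IrradianceCache:
    def __init__(self, alpha: float = 0.5):
        self.records = []   # (position, normal, irradiance, validity radius)
        self.alpha = alpha  # accuracy parameter: smaller = stricter reuse

    def add(self, pos, normal, irradiance, radius):
        self.records.append((np.asarray(pos, float),
                             np.asarray(normal, float), irradiance, radius))

    def lookup(self, pos, normal):
        """Interpolate cached irradiance; return None on a cache miss."""
        pos, normal = np.asarray(pos, float), np.asarray(normal, float)
        w_sum, e_sum = 0.0, 0.0
        for p_i, n_i, e_i, r_i in self.records:
            # Weight penalises distance and normal divergence.
            err = (np.linalg.norm(pos - p_i) / r_i
                   + np.sqrt(max(0.0, 1.0 - normal @ n_i)))
            if err < self.alpha:
                w = 1.0 / max(err, 1e-6)
                w_sum += w
                e_sum += w * e_i
        return e_sum / w_sum if w_sum > 0 else None  # None -> compute & add

cache = IrradianceCache()
cache.add((0, 0, 0), (0, 0, 1), irradiance=0.8, radius=1.0)
print(cache.lookup((0.1, 0, 0), (0, 0, 1)))  # ~0.8, served from the cache
```

The wait-free contribution of the thesis concerns how many rendering threads can read and write such a structure simultaneously without locks, which a Python sketch cannot meaningfully reproduce.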
254

An investigation of model-based techniques for automotive electronic system development

Guo, Yue January 2009 (has links)
Over the past decades, the adoption of electronic systems in automotive vehicles has grown rapidly. This growth has been driven by the premium automobile sector, where diverse electronic systems are now in use, including systems that control the engine, transmission, suspension and handling of a vehicle; air bag and other advanced restraint systems; comfort systems; security systems; and entertainment and information (infotainment) systems. In systems terms, automotive embedded electronic systems can now be classified as a System of Systems (SoS). Automotive systems engineering requires a sustainable integration of new methods, development processes, and tools that are specifically adapted to the automotive domain. Model-based design is one potential methodology for designing, implementing and managing such complex distributed systems, and for integrating them into one cohesive and reliable SoS, to meet the challenges facing the automotive industry. This research investigated the model-based design of a 4×4 Information System within an automotive electronic SoS. Two distinct model-based approaches to the development of an automotive electronic system are discussed in this study. The first approach uses the Systems Modelling Language (SysML) based tool ARTiSAN Studio for structural modelling, functional modelling and code generation. The second approach uses the MATLAB-based tools Simulink and Stateflow for functional modelling and code generation. The results show that building the model in SysML using ARTiSAN Studio provides a clearly structured visualisation of the 4×4 Information System from both structural and behavioural viewpoints, and the SysML model facilitates a more comprehensive understanding of the system than the model built in Simulink/Stateflow. Conversely, the Simulink/Stateflow model demonstrates superior performance in producing higher-quality, more efficient C code for automotive software delivery than the model built in ARTiSAN Studio. Furthermore, this thesis provides insight into an advanced function development approach based on real-time simulation and animation of the 4×4 Information System. Finally, the thesis draws conclusions about how to make use of model-based design for the development of an automotive electronic SoS.
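The sketch below gives a flavour of the kind of state-chart logic that tools such as Stateflow generate code from, expressed as a simple transition table. The mode names and driver events for a 4×4 system are hypothetical, invented for illustration; the thesis's actual 4×4 Information System model is not reproduced here.

```python
# A hypothetical state-chart sketch for a 4x4 mode selector; states and
# events are assumptions, not taken from the thesis's model.
from enum import Enum, auto

class Mode(Enum):
    ROAD = auto()
    OFF_ROAD = auto()
    LOW_RANGE = auto()

# Transition table: (current mode, driver event) -> next mode.
TRANSITIONS = {
    (Mode.ROAD, "select_off_road"):      Mode.OFF_ROAD,
    (Mode.OFF_ROAD, "select_low_range"): Mode.LOW_RANGE,
    (Mode.OFF_ROAD, "select_road"):      Mode.ROAD,
    (Mode.LOW_RANGE, "select_off_road"): Mode.OFF_ROAD,
}

def step(mode: Mode, event: str) -> Mode:
    """Advance the state machine; unknown events leave the mode unchanged."""
    return TRANSITIONS.get((mode, event), mode)

mode = Mode.ROAD
for event in ["select_off_road", "select_low_range"]:
    mode = step(mode, event)
print(mode)  # Mode.LOW_RANGE
```

Code generators emit the equivalent switch/table logic in C; the thesis's comparison concerns how well each toolchain produces such code from graphical models.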
255

Semantic labelling of road scenes using supervised and unsupervised machine learning with lidar-stereo sensor fusion

Osgood, Thomas J. January 2013 (has links)
At the highest level, the aim of this thesis is to review and develop reliable and efficient algorithms for classifying road scenery, primarily using vision-based technology mounted on vehicles. The purpose of this technology is to enhance vehicle safety systems in order to prevent accidents which cause injuries to drivers and pedestrians. This thesis uses LIDAR-stereo sensor fusion to analyse the scene in the path of the vehicle and apply semantic labels to the different content types within the images. It details every step of the process, from raw sensor data to automatically labelled images. At each stage of the process, currently used methods are investigated and evaluated, and in cases where existing methods do not produce satisfactory results, improved methods are suggested. In particular, this thesis presents a novel, automated method for aligning LIDAR data to the stereo camera frame without the need for specialised alignment grids. For image segmentation, a hybrid approach is presented, combining the strengths of both edge detection and mean-shift segmentation. For texture analysis, the presented method uses GLCM metrics, which allow texture information to be captured and summarised using only four feature descriptors, compared to the hundreds produced by SURF descriptors. In addition to texture descriptors, the 3D information provided by the stereo system is also exploited: the segmented point cloud is used to determine orientation and curvature using polynomial surface fitting, a technique not yet applied to this application. Regarding classification methods, a comprehensive study was carried out comparing the performance of SVM and neural network algorithms for this particular application. The outcome shows that, for this particular set of learning features, SVM classifiers offer slightly better performance in the context of image- and depth-based classification, which was not made clear in the existing literature. Finally, a novel method of making unsupervised classifications is presented: segments are automatically grouped into sub-classes which can then be mapped to more expressive super-classes as needed. Although the method in its current state does not yet match the performance of supervised methods, it produces usable classification results without the need for any training data. In addition, the method can be used to automatically sub-class classes with significant inter-class variation into more specialised groups prior to their being used as training targets in a supervised method.
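The sketch below illustrates GLCM-based texture description with scikit-image, summarising a patch with four metrics and feeding them to an SVM. The choice of metrics, patch size, class labels and the randomly generated training patches are illustrative assumptions, not the thesis's exact configuration or data.

```python
# A brief sketch of GLCM texture features plus an SVM classifier;
# training patches and labels here are synthetic placeholders.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def glcm_features(patch: np.ndarray) -> np.ndarray:
    """Summarise a greyscale patch with four GLCM metrics."""
    glcm = graycomatrix(patch, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    return np.array([graycoprops(glcm, p)[0, 0]
                     for p in ("contrast", "homogeneity",
                               "energy", "correlation")])

# Hypothetical training data: random patches standing in for segments.
rng = np.random.default_rng(0)
patches = [rng.integers(0, 256, (32, 32), dtype=np.uint8) for _ in range(20)]
labels = [0] * 10 + [1] * 10                  # e.g. 0 = road, 1 = vegetation
X = np.stack([glcm_features(p) for p in patches])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X[:2]))
```

The appeal noted in the abstract is dimensionality: four GLCM descriptors per segment keep the SVM's feature space small compared with high-dimensional SURF descriptors.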
256

Continuous path: the evolution of process control technologies in post-war Britain

Hamilton, Ross January 1997 (has links)
Automation - the alliance of a series of advances in manufacturing technology with the academic discipline of cybernetics - was the centre of both popular and technical debate for a number of years in the mid-1950s. Alarmists predicted social disruption, economic hardship, and a massive de-skilling of the workforce; while technological positivists saw automation as an enabling technology that would introduce a new age of prosperity. At the same time as this debate was taking place, increasingly sophisticated control technologies based on digital electronics and the principle of feedback control were being developed and applied to industrial manufacturing systems. This thesis examines two stages in the evolution of process control technology: the numerical control of machine tools; and the development of the small computer, or minicomputer. In each case two key themes are explored: the notion of industrial failure; and the role of new technologies in Britain's industrial decline. In Britain, four projects were undertaken to develop point-to-point or continuous path automatic controllers for machine tools in the mid-1950s - three by electronics firms and one by a traditional machine tool manufacturer. However, although automation was dominating popular debate at the time, the anticipated market for numerically controlled systems failed to appear, and all of the early projects were abandoned. It is argued that while the electronics firms naively misdirected their limited marketing capabilities, the root of the problem was the traditional machine tool manufacturers' conservatism and their failure to embrace the new technology. A decade later, small computers based on new semiconductor technologies had emerged in the United States. Originally developed for roles in industrial automation, they soon began to compete at the low end of the mainframe computer market. Soon afterwards a number of British firms - electronic goods manufacturers, entrepreneurial start-ups, and even office machinery suppliers - began to develop minicomputers. The Wilson government saw computers as a central element of industrial modernisation, and thus a part of its solution to Britain's economic decline, so the Ministry of Technology was charged with the promotion of the British minicomputer industry. However, US-built systems proved more competitive, and by the mid-1970s they had come to dominate the market, with the few remaining British firms relegated to niche roles. It is argued that government involvement in the minicomputer industry was ineffectual, and that the minicomputer manufacturers' organisational cultures played a major role in the failure of the British industry.
257

An investigation of modular dependencies in aspects, features and classes

Yang, Shoushen January 2007 (has links)
Thesis (M.S.) -- Worcester Polytechnic Institute. Keywords: conflict; precedence; dependency; feature-oriented programming; object-oriented design; aspect-oriented programming. Includes bibliographical references (p. 76-78).
258

Java prototype of HyperCard bibliography: past implementation and present choices

Reddy, Neeta 01 January 2002 (has links)
This project was started with JDK Version 1.0 and later upgraded to JDK Version 1.2.2 to create a graphical user interface, using the Abstract Window Toolkit (AWT), for a HyperCard bibliography of software engineering. The bibliographic index tool is designed to facilitate searching for text and runs as a Java applet. It presents an alphabetically ordered list of author names and subjects. With the bibliography index tool, one can manipulate a bibliographic list directly over the World Wide Web on any computer that can display electronic bibliographies.
