About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
221

The value of different methods of obtaining information in a colliery control centre

Raynard, John Charles January 1972 (has links)
No description available.
222

A decision model for e-procurement decision support systems for the public sector using multi-criteria decision analysis

Adil, Mohamed January 2015 (has links)
This PhD research aims to identify, analyse and evaluate a decision model for an e-procurement Decision Support System (DSS) for the public sector in the Maldives, focusing especially on the education sector. The DSS uses Multi-Criteria Decision Analysis (MCDA) to evaluate procurement alternatives. The features and characteristics of public sector procurement rest on major public sector principles such as non-discrimination, equality, transparency and proportionality, which result in an organised, step-by-step procurement procedure. However, this research focuses only on decisions based on the performance of suppliers against a pre-set list of criteria, where MCDA is applied to the evaluation. The research studied the applicability to this problem context of a comprehensive set of MCDA methods identified in the literature, including linear weighting methods, single synthesising criterion (utility theory) methods, outranking methods, fuzzy methods and mixed methods. The research adopted the Design Science Research (DSR) methodology, which is intended to design an artefact; here the artefact is the decision model. DSR was chosen because it provides the artefact, explains how to use it and shows how to evaluate it, and these three components are of prime importance for the research project. The methodology follows a set of specific guidelines provided by Information Systems (IS) research scholars for such IS research projects. To support the process steps of the research project, literature reviews of public sector procurement and of MCDA were undertaken, focus-group field research was carried out, and selected documented data on procurement evaluations were collected for performance analysis of the MCDA methods in context. The first part of the literature review established the requirements and constraints of public sector procurement in general and of the Maldivian public sector in particular; the second part identified MCDA methods and their procedures and characteristics. Focus group discussions were conducted with public sector procurement evaluation officials of selected education institutions to identify operational constraints and requirements of procurement. A criteria-based evaluation then compared the characteristics of the MCDA methods against the public sector requirements gathered from the literature review and the focus groups, in order to identify the applicable methods. This analysis filtered the candidates down to two methods: TOPSIS (Technique for Order Preference by Similarity to an Ideal Solution) and COPRAS (COmplex PRoportional ASsessment). Finally, a performance analysis was carried out by applying real-life procurement data, collected from selected public sector institutions, to both methods. Congruence/incongruence analysis, variance analysis, stability analysis and a further MCDA were performed on the results of the two methods with this real-life data. The performance analysis shows TOPSIS having higher variance and stability than COPRAS, while the congruence/incongruence analysis was inconclusive. Based on the results of the criteria-based evaluation and the performance analysis, MCDA was applied to choose between TOPSIS and COPRAS, using the current public sector procurement evaluation method (weighted sum) together with the two filtered methods; this MCDA also favoured TOPSIS.
Therefore, based on this research, the recommended decision model for a public sector e-procurement DSS in the Maldivian context is TOPSIS. The major research outputs are the identification of the public sector requirements in this context, the characterisation of the majority of MCDA methods against that context, and the relative performance strengths of TOPSIS and COPRAS. In addition, the research identified the suitable decision model for the context, a theory of its use in the public sector of the Maldives, and a framework for identifying and evaluating the decision model.
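The abstract above hinges on the TOPSIS ranking procedure. The following is a minimal sketch of standard TOPSIS as commonly described in the MCDA literature, not the thesis's own implementation; the decision matrix, criteria, weights and supplier bids in the example are invented purely for illustration.

```python
import numpy as np

def topsis(decision_matrix, weights, benefit):
    """Rank alternatives with the standard TOPSIS procedure.

    decision_matrix: rows = alternatives (e.g. supplier bids), columns = criteria.
    weights: criterion weights summing to 1.
    benefit: True where a larger score is better, False for cost criteria.
    """
    m = np.asarray(decision_matrix, dtype=float)
    w = np.asarray(weights, dtype=float)

    # Vector-normalise each criterion column, then apply the weights.
    norm = m / np.linalg.norm(m, axis=0)
    weighted = norm * w

    # Ideal and anti-ideal points depend on whether a criterion is benefit or cost.
    ideal = np.where(benefit, weighted.max(axis=0), weighted.min(axis=0))
    anti_ideal = np.where(benefit, weighted.min(axis=0), weighted.max(axis=0))

    # Closeness coefficient: distance from the anti-ideal relative to both distances.
    d_pos = np.linalg.norm(weighted - ideal, axis=1)
    d_neg = np.linalg.norm(weighted - anti_ideal, axis=1)
    return d_neg / (d_pos + d_neg)

# Hypothetical example: three supplier bids scored on price, delivery time and quality.
scores = topsis(
    decision_matrix=[[4200, 14, 7], [3900, 21, 6], [4500, 10, 9]],
    weights=[0.5, 0.2, 0.3],
    benefit=[False, False, True],  # price and delivery time are costs, quality is a benefit
)
print(scores.round(3))  # a higher closeness coefficient means a better-ranked bid
```

In an e-procurement DSS, each row of the decision matrix would come from the evaluation of one bid against the pre-set criteria, and the returned closeness coefficients would order the bids.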
223

Target identification of anti-prion compounds

Valencia, J. January 2014 (has links)
No description available.
224

An identification of decision-making factors in post-implementation development of e-government projects in the UK : a single case study of Sheffield City Council

Mojtahed, Reza January 2015 (has links)
No description available.
225

Internet based molecular collaborative and publishing tools

Casher, Omer January 2010 (has links)
The scientific electronic publishing model has hitherto been an Internet-based delivery of electronic articles that are essentially replicas of their paper counterparts. They contain little in the way of added semantics that might better expose the science, assist the peer review process and facilitate follow-on collaborations, even though the enabling technologies have been available for some time and are mature. This thesis examines the evolution of chemical electronic publishing over the past 15 years. It illustrates, with the help of two frameworks, how publishers should be exploiting these technologies to improve the semantics of chemical journal articles, namely their value-added features and their relationships with other chemical resources on the Web. The first framework is an early exemplar of structured and scalable electronic publishing in which a Web content management system and a molecular database are integrated. It employs a test bed of articles from several RSC journals together with supporting molecular coordinate and connectivity information. The value of converting 3D molecular expressions in chemical file formats, such as the MOL file, into more generic 3D graphics formats, such as Web3D, is assessed. This exemplar highlights the use of metadata management for bidirectional hyperlink maintenance in electronic publishing. The second framework repurposes this metadata management concept into a Semantic Web application called SemanticEye. SemanticEye demonstrates how relationships between chemical electronic articles and other chemical resources are established. It adapts the successful semantic model used for digital music metadata management by popular applications such as iTunes. Globally unique identifiers enable relationships to be established between articles and other resources on the Web, and SemanticEye implements two: the Digital Object Identifier (DOI) for articles and the IUPAC International Chemical Identifier (InChI) for molecules. SemanticEye's potential as a framework for seeding collaborations between researchers who have hitherto never met is explored using FOAF, the friend-of-a-friend Semantic Web standard for social networks.
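To make the identifier-based linking concrete, the sketch below stores article-molecule relationships as simple subject-predicate-object triples keyed by DOI and InChI. This is an illustrative toy in the spirit of RDF, not SemanticEye's actual data model or code; the identifiers, predicates and author URI are hypothetical.

```python
# A toy triple store: (subject, predicate, object), in the spirit of RDF.
triples = set()

def link(subject, predicate, obj):
    triples.add((subject, predicate, obj))

def articles_about(molecule):
    """All articles linked to a molecule through the 'discusses' predicate."""
    return {s for s, p, o in triples if p == "discusses" and o == molecule}

# Hypothetical identifiers: DOIs for articles, an InChI for a molecule.
article = "doi:10.1000/example.2010.001"
ethanol = "InChI=1S/C2H6O/c1-2-3/h3H,1-2H3"

link(article, "discusses", ethanol)
link("doi:10.1000/example.2009.042", "discusses", ethanol)
link(article, "authoredBy", "mailto:researcher@example.org")  # FOAF-style person node

# Because both identifiers are globally unique, any article citing the same InChI
# can be related back to this molecule, and shared molecules or authors can seed
# potential collaborations between researchers who have never met.
print(articles_about(ethanol))
```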
226

Broadcast news processing: Structural Classification, Summarisation and Evaluation

Kolluru, BalaKrishna January 2006 (has links)
This thesis describes the automation and evaluation of structural classification and summarisation of audio documents, specifically broadcast news programmes. News broadcasts are typically 30-minute episodes consisting of several stories describing various events, incidents and current affairs. Some of these news stories are annotated to train the statistical models. Structural classification techniques use speaker-role information (e.g. anchor, reporter) to categorise the stories into broad classes such as reader and interview. A small set of carefully drafted rules assigns a specific speaker-role to each utterance, and these roles are subsequently used to classify the news stories. It is argued in this thesis that selecting the most relevant subsentence linguistic components is an efficient information gathering mechanism for summarisation. Short to intermediate-sized (15 to 50 word) summaries are automatically generated by an iterative decremental refining process that first decomposes a story into sentences and then further divides them into chunks or phrases. The most relevant parts are retained at each iteration until the desired number of words is reached. These chunks are then joined using a set of junction words, chosen by a combination of language model and probabilistic parser scores, to generate a fluent summary. The performance of this approach is measured using a novel bipartite evaluation mechanism. It is shown that the summaries need to be measured for informativeness, and an approach based on a comprehension test is therefore employed to calculate such scores. The evaluation mechanism uses a fluency scale, based on comprehensibility and coherence, to quantify the fluency of summaries. In experiments, human-authored summaries were analysed with the comprehension test to quantify their subjectivity. Experimental results indicate that the iterative refining approach is considerably more informative than a baseline constructed from the first sentence or the first 50 words of a news story. The results also indicate that the use of junction words improved fluency in the summaries.
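The decremental refining idea can be illustrated with a simplified sketch: split a story into phrase-like chunks and repeatedly drop the least relevant chunk until a word budget is met. The term-frequency relevance score, comma-based chunking and plain rejoining below are illustrative simplifications, not the scoring, chunking or junction-word selection used in the thesis.

```python
import re
from collections import Counter

def summarise(story, target_words=50):
    """Greedy sketch of decremental refinement: keep dropping the least
    relevant chunk until the summary fits the word budget."""
    sentences = re.split(r"(?<=[.!?])\s+", story.strip())
    # Crude chunking: split each sentence at commas into phrase-like units.
    chunks = [c.strip() for s in sentences for c in s.split(",") if c.strip()]

    # Simplified relevance score: mean story-wide term frequency of a chunk's words.
    tf = Counter(w.lower() for w in re.findall(r"\w+", story))
    def relevance(chunk):
        words = re.findall(r"\w+", chunk)
        return sum(tf[w.lower()] for w in words) / max(len(words), 1)

    def word_count(cs):
        return sum(len(re.findall(r"\w+", c)) for c in cs)

    # Decremental refinement: remove the weakest chunk while over budget.
    while len(chunks) > 1 and word_count(chunks) > target_words:
        chunks.remove(min(chunks, key=relevance))

    # In the thesis the retained chunks are joined with junction words chosen by
    # language-model and parser scores; here we simply rejoin them with commas.
    return ", ".join(chunks)
```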
227

Supporting user query reformulation and searching: A concept hierarchy approach

Joho, Hideo January 2007 (has links)
While technological advances have enabled us to access extensive document collections, formulating a query that is well designed for an information retrieval (IR) system remains a difficult task. A number of methods have been developed to support user query formulation and reformulation based on terminological feedback. Terminological feedback offers a set of terms that can be used to modify an existing query. This also gives users an opportunity to transform part of the query reformulation process into term selection: a potentially simpler task. There is, however, much room for investigating and improving the interaction between users and IR systems with regard to query reformulation. The limited context and structure of suggested terms are just some of the problems found with existing methods. This thesis presents a new approach to supporting user query reformulation and searching. The approach is based on a hierarchical organisation of terms that is dynamically derived from a set of retrieved documents. The thesis investigates both statistical and lexical aspects of terms as a means of deriving a hierarchy from texts. As a summative evaluation of the approach, a user study is carried out to investigate several aspects of user interaction with the support system. A search interface is developed to integrate a visualised hierarchy into the search results of an IR system. Two types of hierarchy are evaluated on a TREC test collection and compared to a baseline with no hierarchy. Results suggest that multiple aspects of the information searching process can be supported by the hierarchies. In particular, the range of search vocabulary employed to complete a task is shown to increase, and browsing of retrieved documents is found to be facilitated by the hierarchies.
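One common statistical way to derive a term hierarchy from retrieved documents is co-occurrence subsumption, in which a term is placed above another when documents containing the narrower term almost always also contain the broader one. The sketch below illustrates that general idea with an invented threshold and toy snippets; it is not the thesis's implementation, and the tokenisation and example query are assumptions for illustration.

```python
from collections import defaultdict

def build_hierarchy(docs, subsume_threshold=0.8):
    """Derive parent -> children term relations by co-occurrence subsumption.

    Term x is taken to subsume term y when most documents containing y also
    contain x, while x appears in strictly more documents than y.
    """
    # Record which documents each term occurs in.
    term_docs = defaultdict(set)
    for i, doc in enumerate(docs):
        for term in set(doc.lower().split()):
            term_docs[term].add(i)

    hierarchy = defaultdict(list)
    terms = list(term_docs)
    for x in terms:
        for y in terms:
            if x == y:
                continue
            overlap = len(term_docs[x] & term_docs[y]) / len(term_docs[y])
            if overlap >= subsume_threshold and len(term_docs[x]) > len(term_docs[y]):
                hierarchy[x].append(y)  # x is the broader term; y is placed under it
    return hierarchy

# Hypothetical retrieved snippets for the query "query expansion".
docs = [
    "query expansion with relevance feedback",
    "pseudo relevance feedback for query expansion",
    "query reformulation and term selection",
    "interactive query reformulation interfaces",
]
print(dict(build_hierarchy(docs)))
```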
228

The Prediction of Molecular Properties Using Similarity Searching and Free-Wilson Analysis

Patel, Yogendra January 2008 (has links)
The overall aim of this thesis is to predict the biological properties of molecules. The thesis first reports on the use of similarity searching for property prediction. Predictions were made by taking the property values of a compound's k-nearest neighbours found from a similarity search. The initial work used structural descriptors, followed by the use of a compound's property values (e.g. activity values across several different targets) as descriptors. Property-value descriptors showed promising results for molecular property prediction compared with classical structural descriptors, but the available datasets did not allow a firm conclusion to be drawn about the technique. The use of Turbo Similarity Searching (TSS) was then investigated with k-nearest neighbour predictions based on structural descriptors. The second part of the thesis investigated the use of Free-Wilson Analysis (FWA) in conjunction with lead optimisation and library design. It was shown that datasets can be classified into three classes: those which are successful with respect to FWA, those which are not, and those which are partially successful. For the partially successful cases it was demonstrated that it is possible to identify R-groups which do not make an independent contribution to the property being investigated. It was also found that 30% of the compounds in a full combinatorial library are sufficient to generate a successful model. Ranking the R-groups at a position on a scaffold according to their property contributions (for several different properties) can be used to generate an R-group profile, as long as the FWA is successful for the properties being considered.
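The k-nearest-neighbour prediction described above can be sketched as follows, using the Tanimoto coefficient over fingerprint bit sets and averaging the neighbours' property values. The fingerprints and activity values are made up, and the choice of Tanimoto similarity and of a simple mean are assumptions for illustration rather than the thesis's exact procedure.

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto coefficient between two fingerprints given as sets of on-bits."""
    inter = len(fp_a & fp_b)
    return inter / (len(fp_a) + len(fp_b) - inter) if (fp_a or fp_b) else 0.0

def knn_predict(query_fp, training_set, k=3):
    """Predict a property as the mean value of the k most similar training compounds."""
    ranked = sorted(training_set, key=lambda item: tanimoto(query_fp, item[0]), reverse=True)
    neighbours = ranked[:k]
    return sum(value for _, value in neighbours) / len(neighbours)

# Hypothetical training data: (fingerprint on-bits, measured activity value).
training = [
    ({1, 4, 7, 9}, 6.2),
    ({1, 4, 8}, 5.9),
    ({2, 3, 5, 11}, 3.1),
    ({1, 7, 9, 12}, 6.5),
]
print(knn_predict({1, 4, 7, 12}, training, k=3))  # mean activity of the 3 nearest neighbours
```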
229

The Development of a Programme of User Education at Chalmers University of Technology Library

Fjällbrant, N. January 1976 (has links)
No description available.
230

Designing websites to meet older people's information needs

Barrett, Julia January 2010 (has links)
No description available.
