11

The Ahern Committee and the education policy-making process in Queensland

Scott, Ann. Unknown Date (has links)
No description available.
13

Network Sockets, Threading or select for multiple concurrent connections

Franzén, Nicklas January 2008 (has links)
The purpose of the thesis is to present a foundation for selecting an appropriate model when building a concurrent network server, focusing on a comparison between a select()-based server and one that uses a thread for each connection. The test conducted herein is based on two echo servers (the message sent is echoed back to the sender) and the time they take to serve a number of clients. The programs written for it are run on both Windows and Linux to show whether the choice of platform affects each method's efficiency. The return time of select() for a given number of sockets is also examined, as well as the time it takes to create a set number of threads. The conclusion drawn in this thesis is that for up to 512 sockets there is no significant difference in the time it takes for the test program to return; this was true for both the Windows and the Linux tests. Note, however, that the threading implementation requires much more memory than the select-based one, so in the end the choice largely comes down to personal preference. / E-Mail: Nicklas.Franzen@gmail.com Mobile: 0703-506904
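A minimal sketch of the two server models compared above, written against Python's standard socket, select and threading modules; it is not the thesis's own test program (which is not reproduced in the abstract), and the host, port and buffer size are illustrative placeholders.

import select
import socket
import threading

HOST, PORT = "127.0.0.1", 9000  # placeholder test address


def echo_server_select(listen_sock: socket.socket) -> None:
    """Single-threaded echo server multiplexing all clients with select()."""
    listen_sock.setblocking(False)
    sockets = [listen_sock]
    while True:
        readable, _, _ = select.select(sockets, [], [])
        for s in readable:
            if s is listen_sock:
                conn, _ = s.accept()   # new client connection
                conn.setblocking(False)
                sockets.append(conn)
            else:
                data = s.recv(4096)
                if data:
                    s.send(data)       # echo back (a full version would buffer partial sends)
                else:
                    sockets.remove(s)  # client closed the connection
                    s.close()


def echo_server_threaded(listen_sock: socket.socket) -> None:
    """Echo server spawning one thread per accepted connection."""
    def handle(conn: socket.socket) -> None:
        with conn:
            while data := conn.recv(4096):
                conn.sendall(data)     # echo the message back

    while True:
        conn, _ = listen_sock.accept()
        threading.Thread(target=handle, args=(conn,), daemon=True).start()


if __name__ == "__main__":
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((HOST, PORT))
    srv.listen()
    echo_server_select(srv)  # or echo_server_threaded(srv)

The select() variant serves every connection from a single thread, while the threaded variant pays the per-connection thread and memory cost the abstract notes; both expose the same echo behaviour to clients on Windows and Linux.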
14

Managing innovation search and select in disrupting environments

Russell, William Edward January 2016 (has links)
This thesis explores how organisations manage new product development (NPD) focused innovation across a portfolio of core, adjacent and breakthrough environments. The study focuses on the search and select phases of the innovation process, and how incumbents identify and validate a range of opportunities. Organisations face the paradox of how to establish search and select routines for focal markets, while also setting up routines to sense and respond to disruptive innovation signals from adjacent and more peripheral environments. The study builds on research into peripheral vision, and considers how organisations operationalise innovation search and select in disrupting environments. To analyse how organisations manage search and select in turbulent environments, the author conducted research in the disrupting higher education (HE) publishing industry using qualitative research methods. The study focused on ten case companies that together publish 9,000 of the world's 32,000 academic journals; the researcher conducted 61 interviews with 63 individuals across these companies over a six-month period. The interviewees ranged from CEOs and CTOs to production, operations, editorial, publishing, sales and marketing directors and managers. The analysis revealed 11 search and select capabilities that need to be in place to manage NPD effectively in HE publishing. The research identified five contextual factors that influence how search and select is operationalised in disrupting environments. A framework is proposed to enable the mapping of individual opportunities within a wider NPD portfolio. The project identified ten key market insight areas where firms in the HE publishing sector need to focus. The findings have implications for practice, especially for HE publishers, online media companies, and business-to-business service organisations. Further research is proposed into how the cognitive frames of boards and senior teams affect the structure and operationalisation of NPD portfolios; how visual media companies search for, develop (ideate) and select programme and film projects in the disrupting media sector; and how workflow mapping and the identification of jobs-to-be-done is deployed within the NPD process in different settings.
15

A Conductor's Guide to Select Choral Works of Eurico Carrapatoso

Lourenço, Paulo V. 10 October 2014 (has links)
No description available.
16

The Intelligence Oversight Mechanism Used by Congress: A Comparison of the U.S. Congress and Taiwan's Legislative Yuan

Su, Lung-Chi 10 August 2004 (has links)
This thesis focuses on the oversight mechanism used by congress to supervise the intelligence department, mainly through examining the historical development of the oversight mechanism that the U.S. Congress uses over the Central Intelligence Agency (CIA), as well as evaluating that mechanism's successes and failures, in order to find a suitable direction for establishing an oversight mechanism for our country's Legislative Yuan over the National Security Agency (NSA). First, the inceptive backgrounds and historical developments of the CIA and the NSA are introduced. After establishing an understanding of the special backgrounds and developments of the two agencies, the writer, using the Institutional Process Theory, analyzes and discusses how the U.S. Congress's oversight mechanism over the CIA has progressed, thereby determining the keys to the successes and failures of the U.S. Congress's intelligence oversight mechanism. Having analyzed the intelligence oversight mechanism of the U.S. Congress, the writer offers suggestions as to how our country's Legislative Yuan can develop an oversight mechanism over the NSA in the future. Lastly, from these discussions, the writer addresses the contributions, propositions, and limitations of this research, in the hope that this research and discussion can assist the Legislative Yuan in institutionalizing a comprehensive intelligence oversight mechanism over the NSA.
17

A Personal Documentation System for Scholars: A Tool for Thinking

Burkett, Leslie Stewart 12 1900 (has links)
This exploratory research focused on a problem stated years ago by Vannevar Bush: "The problem is how creative men think, and what can be done to help them think." The study explored the scholarly work process and the use of computer tools to augment thinking. Based on a review of several related literatures, a framework of 7 major categories and 28 subcategories of scholarly thinking was proposed. The literature was used to predict problems scholars have in organizing their information, potential solutions, and specific computer tool features to augment scholarly thinking. Info Select, a personal information manager with most of these features (text and outline processing, sophisticated searching and organizing), was chosen as a potential tool for thinking. The study looked at how six scholars (faculty and doctoral students in social science fields at three universities) organized information using Info Select as a personal documentation system for scholarly work. These multiple case studies involved four in-depth, focused interviews, written evaluations, direct observation, and analysis of computer logs and files collected over a 3- to 6-month period. A content analysis of interviews and journals supported the proposed AfFORD-W taxonomy: Scholarly work activities consisted of Adding, Filing, Finding, Organizing, Reminding, and Displaying information to produce a Written product. Very few activities fell outside this framework, and activities were distributed evenly across all categories. Problems, needs, and likes mentioned by scholars, however, clustered mainly in the filing, finding, and organizing categories. All problems were related to human memory. Both predictions and research findings imply a need for tools that support information storage and retrieval in personal documentation systems, for references and notes, with fast and easy input of source material. A computer tool for thinking should support categorizing and organizing, reorganizing and transporting information. It should provide a simple search engine and support rapid scanning. The research implies the need for tools that provide better affordances for scholarly thinking activities.
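Purely as a hypothetical illustration of the tool features the findings call for (fast input of notes and references, filing under categories, a simple search, reorganising), the sketch below outlines a tiny documentation store in Python; it is not Info Select or any system examined in the study, and all names are invented.

from dataclasses import dataclass, field
from typing import List, Set


@dataclass
class Note:
    text: str
    categories: Set[str] = field(default_factory=set)  # e.g. "reference", "organizing"


class DocumentationStore:
    """A toy personal documentation system: add, file, find and organize notes."""

    def __init__(self) -> None:
        self.notes: List[Note] = []

    def add(self, text: str, *categories: str) -> Note:
        """Fast input of source material: append a note, optionally filed under categories."""
        note = Note(text, set(categories))
        self.notes.append(note)
        return note

    def find(self, term: str) -> List[Note]:
        """Simple search: case-insensitive substring scan over all notes."""
        term = term.lower()
        return [n for n in self.notes if term in n.text.lower()]

    def organize(self, category: str) -> List[Note]:
        """Retrieve every note filed under one category, ready for reorganising."""
        return [n for n in self.notes if category in n.categories]


if __name__ == "__main__":
    store = DocumentationStore()
    store.add("Bush (1945), As We May Think", "reference")
    store.add("Idea: support reorganising and transporting notes", "organizing")
    print(len(store.find("notes")), len(store.organize("reference")))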
18

Upper and Lower Bounds for Text Indexing Data Structures

Golynski, Alexander 10 December 2007 (has links)
The main goal of this thesis is to investigate the complexity of a variety of problems related to text indexing and text searching. We present new data structures that can be used as building blocks for full-text indices which occupy very little space (FM-indexes) and wavelet trees. These data structures can also be used to represent labeled trees and posting lists. Labeled trees are used to represent XML documents, and posting lists are used in search engines. The main emphasis of this thesis is on lower bounds for time-space tradeoffs for the following problems: the rank/select problem, the problem of representing a string of balanced parentheses, the text retrieval problem, the problem of computing a permutation and its inverse, and the problem of representing a binary relation. These results are divided into two groups: lower bounds in the cell probe model and lower bounds in the indexing model. The cell probe model is the most natural and widely accepted framework for studying data structures. In this model, we are concerned with the total space used by a data structure and the total number of accesses (probes) it performs to memory, while computation is free of charge. The indexing model imposes an additional restriction on the storage: the object in question must be stored in its raw form together with a small index that facilitates an efficient implementation of a given set of queries, e.g. finding rank, select, a matching parenthesis, or an occurrence of a given pattern in a given text (for the text retrieval problem). We propose a new technique for proving lower bounds in the indexing model and use it to obtain lower bounds for the rank/select problem and the balanced parentheses problem. We also improve the existing techniques of Demaine and Lopez-Ortiz using compression and present stronger lower bounds for the text retrieval problem in the indexing model. The most important result of this thesis is a new technique for cell probe lower bounds. We demonstrate its strength by proving new lower bounds for the problem of representing permutations, the text retrieval problem, and the problem of representing binary relations. (Previously, no non-trivial results were known for these problems.) In addition, we note that the lower bounds for the permutations problem and the binary relations problem are tight for a wide range of parameters, e.g. the running time of queries and the size and density of the relation.
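A minimal, unoptimised illustration of the rank and select queries these lower bounds concern, on a plain Python list of bits; actual succinct data structures answer both queries in constant time using only o(n) extra bits, which this naive sketch makes no attempt to do.

from typing import List


def rank1(bits: List[int], i: int) -> int:
    """Number of 1-bits in bits[0..i], inclusive."""
    return sum(bits[: i + 1])


def select1(bits: List[int], k: int) -> int:
    """Position of the k-th 1-bit (k is 1-indexed); raises if there are fewer than k ones."""
    count = 0
    for pos, bit in enumerate(bits):
        count += bit
        if bit and count == k:
            return pos
    raise ValueError("fewer than k one-bits in the bit vector")


if __name__ == "__main__":
    bv = [1, 0, 1, 1, 0, 0, 1]
    assert rank1(bv, 3) == 3    # ones at positions 0, 2 and 3
    assert select1(bv, 4) == 6  # the fourth one-bit sits at position 6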
20

Refinement of PTR-MS methodology and application to the measurement of (O)VOCs from cattle slurry

House, Emily January 2009 (has links)
Oxygenated volatile organic compounds ((O)VOCs) contribute to ozone formation, affect the oxidising capacity of the troposphere and are sources of growth, and in some cases formation, of aerosols. It is therefore important to identify and quantify sources of (O)VOCs in the troposphere. In the late 1990s a unique technique for quantification of organic trace gas species, proton transfer reaction mass spectrometry (PTR-MS), was developed. PTR-MS potentially offers rapid response and high sensitivity without the need for sample pre-concentration. Concentrations can be derived from the PTR-MS either by calibration or by calculation from measured ion count rates and kinetic considerations. In this work, the methodology of PTR-MS application is critically assessed. The uncertainties and inaccuracies associated with each parameter employed in the calculation of concentrations are reviewed. This includes a critical appraisal of models for the calculation of the collisional rate constant currently applied in the field of PTR-MS. The use of a model to account for the effects of the electric field, available in the literature but not previously applied to the PTR-MS, is advocated. Collisional rate constants employing each of the models discussed have been calculated for the reactions of H3O+ with over 400 molecules for PTR-MS. In PTR-MS it cannot be assumed that the product ion occurs only at the protonated non-dissociated mass. Few product distributions obtained from PTR-MS are cited in the literature, and even then the reaction chamber conditions (pressure, temperature and electric field strength) are not always specified. A large volume of product distributions for trace gases with H3O+ in selected ion flow tube mass spectrometry (SIFT) exists in the literature and is reviewed. In SIFT, no electric field is applied to the reaction chamber, so the extent and even the nature of fragmentation can differ from that observed in PTR-MS. In addition to the application of an electric field, the energy in the reaction chamber can be increased by increasing the temperature or by variation of the reagent ion. In this work, the increase in energy via the three methods is approximated to enable a comparison of product distributions. The review of product distributions in PTR-MS, selected ion flow drift tube mass spectrometry (SIFDT), variable temperature selected ion flow tube mass spectrometry (VT-SIFT), SIFT, proton transfer reaction time of flight mass spectrometry (PTR-TOF-MS), proton transfer reaction ion trap mass spectrometry (PTR-ITMS) and electron ionisation mass spectrometry (EI-MS) is used alongside thermodynamic considerations to collate a list of potential contributors to a range of mass-to-charge ratios (m/z) in the PTR-MS. The need for further measurements of product distributions as a function of temperature, pressure and electric field strength for a wider range of (O)VOCs is highlighted. This enables dissociation to be better used as a tool for compound identification rather than being considered a hindrance. The collation of likely product distributions is applied to identify possible contributors to m/z observed during PTR-MS measurements of emissions from cattle slurry. Field measurements were made during fertilisation of a grassland site south of Edinburgh in 2004 and 2005, and laboratory-based measurements were made in 2006. Contextual reasoning, previous measurements and isotope ratios are used to narrow the list of possible contributors.
Large concentrations at m/z cautiously identified as alcohols, followed by a later peak in carboxylic acids, were observed during the laboratory measurements. Increases in the corresponding m/z were also observed during the fertilisations. Other tentatively identified compounds emitted included phenol, methyl phenol, trimethylamine, and various sulphur-containing compounds.
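As a rough sketch of deriving a concentration from measured ion count rates and kinetic considerations, the snippet below applies the standard PTR-MS kinetic approximation [R] = i(RH+) / (k * t * i(H3O+)); the rate constant, reaction time and count rates are placeholder values, and instrument-specific corrections such as ion transmission efficiency are omitted.

def ptr_ms_concentration(i_rh: float, i_h3o: float, k: float, t: float) -> float:
    """Trace-gas number density (molecules cm^-3) from PTR-MS ion count rates.

    i_rh  -- count rate of the protonated analyte RH+ (counts s^-1)
    i_h3o -- count rate of the H3O+ reagent ion (counts s^-1)
    k     -- collisional (proton transfer) rate constant (cm^3 s^-1)
    t     -- reaction time in the drift tube (s)
    """
    return i_rh / (k * t * i_h3o)


if __name__ == "__main__":
    k = 2.0e-9   # typical order of magnitude for a proton transfer rate constant (assumed)
    t = 100e-6   # roughly 100 microseconds in the drift tube (assumed)
    density = ptr_ms_concentration(i_rh=500.0, i_h3o=1.0e6, k=k, t=t)
    print(f"{density:.2e} molecules cm^-3")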
