271

The home tutor scheme in the Australian Capital Territory

Oner, J. A., n/a January 1985 (has links)
This study sets out to describe the current situation in the Home Tutor Scheme in the Australian Capital Territory, and to evaluate the Scheme's effectiveness in achieving its goals as listed in the Australian Institute of Multicultural Affairs Review (1980). These stated goals were: to improve the students' English language proficiency, to encourage integration of the students into the wider community, and to prepare them to attend more formal English language classes. The writer also considered a further question in evaluating the Scheme: whether it satisfied the needs and expectations of the tutors and the students. There were two sections to the investigation: the main study, in which the progress of eighteen tutors and their students was followed for a period of up to six months, and a subsidiary study that was designed to assess the generalisability of the data elicited in the main study. A range of instruments was employed. In the main study, findings were derived principally from interviews, and from lesson reports written by tutors. In the subsidiary study, data were collected by means of questionnaires issued to a greater number of tutors and to students from the Scheme's four major language backgrounds. The introductory chapter sets out the purpose of the study and explains its relevance in the current Australian context. This is followed, in Chapter 2, by a review of the relevant literature and previous research. The design of the study is set out in Chapter 3, where details are given of the procedures and instruments employed to gather data. In Chapters 4, 5 and 6, the results of the study are presented. Discussion of these results and a consideration of their implications may be found in Chapter 7. In the final chapter, Chapter 8, the findings are summarised and recommendations are made for future developments in the Scheme. In summary, the study found that in the ACT the Scheme was achieving some success in its language teaching and social objectives, and in satisfying its student clientele. It was also found, however, that the Scheme's operational efficiency was hampered by the low level of staffing and that a significant number of tutors withdrew from the Scheme after a short period because they were not experiencing a high level of satisfaction. The recommendations made would, it is thought, lead to greater efficiency of organisation and could raise the level of tutor satisfaction.
272

Issues of efficiency and equity in the direct subsidy scheme from the parents' perspective

Wan, Ho-yee, Condy. January 1990 (has links)
Thesis (M.Ed.)--University of Hong Kong, 1990. / Includes bibliographical references (leaf 160-163). Also available in print.
273

The Possibility and Effects of Including the Transport Sector in the EU Emission Trading Scheme

Eckerhall, Daniel January 2005 (has links)
The European Union has initiated a scheme for trading CO2 emission allowances as a measure to reduce greenhouse gas emission levels. Since January 2005, companies from certain energy-demanding sectors, responsible for approximately 50% of the total CO2 emissions in the EU, have been participating in this scheme, the so-called EU Emission Trading Scheme.

A trading scheme covering all sectors, i.e. all emissions in the EU, would lead to the most cost-efficient solution for reducing emissions by a certain amount. This means that the EU Emission Trading Scheme should be enlarged to also cover the transport sector, which is not participating today but is responsible for about 21% of the total greenhouse gas emissions in the EU.

There are three ways to include the transport sector in the EU Emission Trading Scheme, i.e. to administrate the handling and trading of emission allowances in the transport sector. The first is a so-called downstream approach, meaning that the actual emitter of the GHG, in this case a private person driving a car or a haulage contractor using trucks to transport goods, would be responsible for acquiring and trading emission allowances according to the amount of greenhouse gases that he emits. The second is a so-called upstream approach, meaning that the owner of fuel depots would be responsible for acquiring and trading emission allowances corresponding to the amount of fossil fuel that he sells, which is proportional to the amount of greenhouse gases emitted when the fuel is used. The third solution is to place the responsibility for acquiring and trading emission allowances on the companies that order the transportation service, indirectly causing greenhouse gas emissions when their goods are transported.

All three solutions have their advantages and disadvantages, but the benefits of the upstream approach are the greatest. By allocating the responsibility for keeping and trading emission allowances at the fuel depots, an extensive part of greenhouse gas emissions from fossil fuel use, not only in the transport sector, could be covered by the EU Emission Trading Scheme at the lowest possible administrative cost.
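The upstream approach hinges on a simple proportionality: combustion emissions are roughly proportional to the volume of fossil fuel sold, so a depot's allowance obligation can be computed directly from its sales. A minimal sketch of that accounting follows; the emission factors are rough public figures used only for illustration and are not taken from the thesis.

```python
# Approximate combustion emission factors (kg CO2 per litre); rough public figures, illustration only.
EMISSION_FACTOR = {"petrol": 2.3, "diesel": 2.7}

def allowances_to_surrender(litres_sold: dict, tonnes_per_allowance: float = 1.0) -> float:
    """Upstream accounting: a fuel depot surrenders allowances in proportion to the fuel it sells,
    since the downstream combustion emissions are (roughly) proportional to fuel volume."""
    kg_co2 = sum(EMISSION_FACTOR[fuel] * litres for fuel, litres in litres_sold.items())
    return kg_co2 / 1000.0 / tonnes_per_allowance  # one allowance covers one tonne of CO2

# A hypothetical depot selling 1 million litres of petrol and 0.5 million litres of diesel in a period:
print(allowances_to_surrender({"petrol": 1_000_000, "diesel": 500_000}))  # ~3650 allowances
```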
274

A Dynamically Partitionable Compressed Cache

Chen, David, Peserico, Enoch, Rudolph, Larry 01 1900 (has links)
The effective size of an L2 cache can be increased by using a dictionary-based compression scheme. Naive application of this idea performs poorly since the data values in a cache greatly vary in their “compressibility.” The novelty of this paper is a scheme that dynamically partitions the cache into sections of different compressibilities. While compression is often researched in the context of a large stream, in this work it is applied repeatedly on smaller cache-line sized blocks so as to preserve the random access requirement of a cache. When a cache-line is brought into the L2 cache or the cache-line is to be modified, the line is compressed using a dynamic LZW dictionary. Depending on the compression achieved, it is placed into the relevant partition. The partitioning is dynamic in that the ratio of space allocated to compressed and uncompressed lines varies depending on the actual performance. Certain SPEC-2000 benchmarks using a compressed L2 cache show an 80% reduction in L2 miss-rate when compared to using an uncompressed L2 cache of the same area, taking into account all area overhead associated with the compression circuitry. For other SPEC-2000 benchmarks, the compressed cache performs as well as a traditional cache that is 4.3 times as large as the compressed cache in terms of hit rate. The adaptivity ensures that, in terms of miss rates, the compressed cache never performs worse than a traditional cache. / Singapore-MIT Alliance (SMA)
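The core mechanism, dictionary compression whose effectiveness varies from line to line, can be illustrated with a short sketch. This is a minimal standalone LZW encoder applied to individual 64-byte cache lines, with a simple half-size placement threshold; both the per-line dictionary and the threshold are assumptions for illustration, whereas the paper's design uses a shared dynamic dictionary and performance-driven partition sizing.

```python
def lzw_compress(line: bytes) -> list[int]:
    """Minimal LZW: encode a byte string as a list of dictionary codes."""
    dictionary = {bytes([i]): i for i in range(256)}  # start with all single-byte entries
    next_code, w, codes = 256, b"", []
    for b in line:
        wb = w + bytes([b])
        if wb in dictionary:
            w = wb                       # extend the current match
        else:
            codes.append(dictionary[w])  # emit code for the longest match found
            dictionary[wb] = next_code   # learn the new phrase
            next_code += 1
            w = bytes([b])
    if w:
        codes.append(dictionary[w])
    return codes

def choose_partition(line: bytes, code_bits: int = 12) -> str:
    """Place a cache line in the compressed partition only if it at least halves in size."""
    compressed_bits = len(lzw_compress(line)) * code_bits
    return "compressed" if compressed_bits <= len(line) * 8 // 2 else "uncompressed"

# A zero-filled line compresses well; a line of all-distinct bytes does not.
print(choose_partition(bytes(64)))         # -> compressed
print(choose_partition(bytes(range(64))))  # -> uncompressed
```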
275

Detached-Eddy Simulation of Flow Non-Linearity of Fluid-Structural Interactions using High Order Schemes and Parallel Computation

Wang, Baoyuan 09 May 2009 (has links)
The objective of this research is to develop an efficient and accurate methodology to resolve flow non-linearity of fluid-structural interaction. To achieve this purpose, a numerical strategy to apply the detached-eddy simulation (DES) with a fully coupled fluid-structural interaction model is established for the first time. The following novel numerical algorithms are also created: a general sub-domain boundary mapping procedure for parallel computation to reduce wall clock simulation time, an efficient and low diffusion E-CUSP (LDE) scheme used as a Riemann solver to resolve discontinuities with minimal numerical dissipation, and an implicit high order accuracy weighted essentially non-oscillatory (WENO) scheme to capture shock waves. The Detached-Eddy Simulation is based on the model proposed by Spalart in 1997. Near solid walls, within wall boundary layers, the Reynolds averaged Navier-Stokes (RANS) equations are solved. Outside of the wall boundary layers, the 3D filtered compressible Navier-Stokes equations are solved based on large eddy simulation (LES). The Spalart-Allmaras one equation turbulence model is solved to provide the Reynolds stresses in the RANS region and the subgrid scale stresses in the LES region. An improved 5th order finite differencing weighted essentially non-oscillatory (WENO) scheme with an optimized epsilon value is employed for the inviscid fluxes. The new LDE scheme used with the WENO scheme is able to capture crisp shock profiles and exact contact surfaces. A set of fully conservative 4th order finite central differencing schemes is used for the viscous terms. The 3D Navier-Stokes equations are discretized based on a conservative finite differencing scheme, which is implemented by shifting the solution points half a grid interval in each direction in the computational domain. The solution points are hence located in the centers of the grid cells in the computational domain (not the physical domain). This makes it possible to use the same code structure as a 2nd order finite volume method. A finite differencing high order WENO scheme is used since a finite differencing WENO scheme is much more efficient than a finite volume WENO scheme. The unfactored line Gauss-Seidel relaxation iteration is employed for time marching. For time accurate unsteady simulation, the temporal terms are discretized using 2nd order accurate backward differencing. A pseudo temporal term is introduced for the unsteady calculation following Jameson's method. Within each physical time step, the solution is iterated until converged based on the pseudo time step. A general sub-domain boundary mapping procedure is developed for arbitrary topology multi-block structured grids with grid points matched on sub-domain boundaries. The interface of two adjacent blocks is uniquely defined according to each local mesh index system (MIS), which is specified independently. A pack/unpack procedure based on the definition of the interface is developed to exchange the data in a 1D array and minimize data communication. A secure send/receive procedure is employed to remove the possibility of blocked communication and achieve optimum parallel computation efficiency. Two terms, "Order" and "Orientation", are introduced as the logic defining the relationship of adjacent blocks. The domain partitioning treatment of the implicit matrices is to simply discard the corner matrices so that the implicit Gauss-Seidel iteration can be implemented within each subdomain.
This general sub-domain boundary mapping procedure is demonstrated to have high scalability. Extensive numerical experiments are conducted to test the performance of the numerical algorithms. The LDE scheme is compared with the Roe scheme for their behavior with RANS simulation. Both the LDE and the Roe scheme can use high CFL numbers and achieve high convergence rates with the algebraic Baldwin-Lomax turbulence model. For the Spalart-Allmaras one equation turbulence model, the extra equation changes the Jacobian of the Roe scheme and weakens the diagonal dominance. This reduces the maximum CFL number permitted by the Roe scheme and hence decreases the convergence rate. The LDE scheme is only slightly affected by the extra equation and maintains a high CFL number and convergence rate. The high stability and convergence rate with the Spalart-Allmaras one equation turbulence model are important since the DES uses the same transport equation for the turbulence stress closure. The RANS simulation with the Spalart-Allmaras one equation turbulence model is the foundation for DES and is hence validated with several benchmark flows, including a 2D subsonic flat plate turbulent boundary layer, a 2D transonic inlet-diffuser, the 2D RAE2822 airfoil, the 3D ONERA M6 wing, and a 3D transonic duct with shock boundary layer interaction. The predicted results agree very well with the experiments. The RANS code is then further used to study the slot size effect of a co-flow jet (CFJ) airfoil. The DES solver with the fully coupled fluid-structural interaction methodology is validated with vortex induced vibration of a cylinder and a transonic forced pitching airfoil. For the cylinder, the laminar Navier-Stokes equations are solved due to the low Reynolds number. 3D effects are observed in both the stationary and oscillating cylinder simulations because of the flow separation behind the cylinder. For the transonic forced pitching airfoil DES computation, there is no flow separation in the flow field. The DES results agree well with the RANS results. These two cases indicate that the DES is more effective at predicting flow separation. The DES code is used to simulate the limit cycle oscillation (LCO) of the NLR7301 airfoil. For the cases computed in this research, the predicted LCO frequency, amplitudes, averaged lift and moment all agree excellently with the experiment. The solutions appear to have bifurcation and are dependent on the initial perturbation. The developed methodology is able to capture the LCO with the very small amplitudes measured in the experiment. This is attributed to the high order low diffusion schemes, the fully coupled FSI model, and the turbulence model used. This research appears to be the first time that a numerical simulation of LCO matches the experiment. The DES code is also used to simulate the CFJ airfoil jet mixing at high angle of attack. In conclusion, the numerical strategy of the high order DES with a fully coupled FSI model and parallel computing developed in this research is demonstrated to have high accuracy, robustness, and efficiency. Future work to further mature the methodology is suggested.
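Because the abstract leans on the 5th order WENO reconstruction, a small sketch may help. The following is a minimal, scalar, left-biased WENO5 reconstruction of an interface value from five cell averages, using the standard Jiang-Shu smoothness indicators; the epsilon parameter is simply a small constant here (the thesis optimizes its value), and the code is illustrative rather than the author's implementation.

```python
def weno5_reconstruct(v_m2, v_m1, v_0, v_p1, v_p2, eps=1e-6):
    """Left-biased 5th order WENO reconstruction of the value at the i+1/2 interface
    from the five cell averages v_{i-2}..v_{i+2} (scalar, uniform grid)."""
    # Candidate 3rd order reconstructions on the three sub-stencils
    q0 = (2*v_m2 - 7*v_m1 + 11*v_0) / 6.0
    q1 = (-v_m1 + 5*v_0 + 2*v_p1) / 6.0
    q2 = (2*v_0 + 5*v_p1 - v_p2) / 6.0
    # Jiang-Shu smoothness indicators
    b0 = 13/12*(v_m2 - 2*v_m1 + v_0)**2 + 0.25*(v_m2 - 4*v_m1 + 3*v_0)**2
    b1 = 13/12*(v_m1 - 2*v_0 + v_p1)**2 + 0.25*(v_m1 - v_p1)**2
    b2 = 13/12*(v_0 - 2*v_p1 + v_p2)**2 + 0.25*(3*v_0 - 4*v_p1 + v_p2)**2
    # Nonlinear weights built from the linear (optimal) weights 1/10, 6/10, 3/10
    a0, a1, a2 = 0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2
    s = a0 + a1 + a2
    return (a0*q0 + a1*q1 + a2*q2) / s

# Smooth data recovers the expected value; near a jump the weights suppress the
# stencils that cross the discontinuity, avoiding spurious oscillations.
print(weno5_reconstruct(1.0, 1.0, 1.0, 1.0, 1.0))   # -> 1.0
print(weno5_reconstruct(1.0, 1.0, 1.0, 0.0, 0.0))   # stays close to 1, no overshoot
```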
276

The Consent Scheme in Hong Kong: its evolution and evaluation: home buyer behaviour in Housing Society's property transactions before and after the Asian financial crisis

Fan, Chi-sun. January 2005 (has links)
Thesis (Ph. D.)--University of Hong Kong, 2005. / Title proper from title frame. Also available in printed format.
277

Numerical Simulation of Breaking Waves Using Level-Set Navier-Stokes Method

Dong, Qian 2010 May 1900 (has links)
In the present study, a fifth-order weighted essentially non-oscillatory (WENO) scheme was built for solving the surface-capturing level-set equation. Combined with the level-set equation, the three-dimensional Reynolds averaged Navier-Stokes (RANS) equations were employed for the prediction of nonlinear wave-interaction and wave-breaking phenomena over sloping beaches. In the level-set finite-analytic Navier-Stokes (FANS) method, the free surface is represented by the zero level-set function, and the flows are modeled as immiscible air-water two phase flows. The Navier-Stokes equations for air-water two phase flows are formulated in a moving curvilinear coordinate system and discretized by a 12-point finite-analytic scheme using the finite-analytic method on a multi-block over-set grid system. The Pressure Implicit with Splitting of Operators / Semi-Implicit Method for Pressure-Linked Equation Revised (PISO/SIMPLER) algorithm was used to determine the coupled velocity and pressure fields. The evolution of the level-set function was solved using the third-order total variation diminishing (TVD) Runge-Kutta method and the fifth-order WENO scheme. The accuracy was confirmed by solving Zalesak's problem. Two major subjects are discussed in the present study. First, to establish the WENO scheme as a more accurate scheme than the essentially non-oscillatory (ENO) scheme, the characteristics of a nonlinear monochromatic wave were studied systematically and comparisons of wave profiles using the two schemes were conducted. To eliminate other factors that might produce wave profile fluctuation, different damping functions and grid densities were studied. To damp the reflected waves efficiently, five damping functions were compared. The free-surface elevation data collected from gauges distributed evenly in a numerical wave tank were analyzed to demonstrate the damping effect of the beach. Second, as a surface-tracking numerical method built on curvilinear coordinates, the level-set RANS model was tested for nonlinear bichromatic wave trains and breaking waves on a sloping beach with a complex free surface. As the wave breaks, the velocity of the flow near the free surface becomes more complex. Numerical modeling was performed to simulate the two-phase flow velocity and the corresponding surface evolution as the wave passed over different sloping beaches. The breaking wave test showed that the method is an efficient technique for accurately capturing the breaking-wave free surface. To predict the breaking points, different wave heights and beach slopes were simulated. The results show that the dependence of wave shape and breaking characteristics on wave height and beach slope matches the results provided by experiments.
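The time integrator named in the abstract, the third-order TVD (SSP) Runge-Kutta method, is compact enough to sketch. The snippet below is a generic illustration of that scheme for an ODE du/dt = L(u), not the thesis code; in the actual solver the operator L would be the WENO-discretized level-set advection term.

```python
import numpy as np

def tvd_rk3_step(u, L, dt):
    """One step of the third-order TVD (SSP) Runge-Kutta scheme of Shu and Osher
    for du/dt = L(u); each stage is a convex combination of forward-Euler steps."""
    u1 = u + dt * L(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1))
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * L(u2))

# Tiny usage example on the scalar ODE du/dt = -u (exact solution exp(-t)).
u, dt = 1.0, 0.1
for _ in range(10):
    u = tvd_rk3_step(u, lambda x: -x, dt)
print(u, np.exp(-1.0))  # third-order accurate, so the two values are close
```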
278

Epistemological Development in Pre-Ministry Undergraduates: A Cross-Institutional Application of the Perry Scheme

Trentham, John David 14 December 2012 (has links)
The intent of this study was to explore the variance of epistemological development in pre-ministry undergraduates across different institutional contexts, using the Perry Scheme as a theoretical lens. Semi-structured interviews were employed in order to elicit information from participants that revealed their personal perspectives regarding their approaches to acquiring, maintaining, and implementing knowledge. Students from three institutional contexts were included in this study: secular university, confessional Christian liberal arts university, and Bible college. A review of the precedent literature for this research presented foundational biblical-theological and theoretical sources that defined and elucidated the context of this study. The biblical-theological analysis first identified the nature of human knowledge and development within the context of the redemptive-historical metanarrative. Then, two prominent biblical themes that relate specifically to epistemological development were treated: the knowledge of God and biblical wisdom. A thorough review of the Perry Scheme was then provided, including theoretical and philosophical underpinnings, the model itself, and major extensions and elaborations of Perry's model. A final section introduced the "principle of inverse consistency" as a paradigm for interacting with Perry and other developmental theories from a biblical worldview. The qualitative research design consisted of five steps. First, the researcher contacted and enlisted students and obtained a Dissertation Study Participation Form from each participant. Second, a customized interview protocol was designed according to the Perry Interview Protocol, in conjunction with the Center for the Study of Intellectual Development (CSID). Third, a pilot study was undertaken. Fourth, one interview was conducted with each participant, and the interviews were transcribed and submitted to the CSID for scoring. Fifth, in addition to the scoring analysis performed by the CSID, the researcher designed and implemented an independent content analysis procedure, including a structured analytical framework of epistemological priorities and competencies. Finally, the scored data and content analysis results were evaluated together and interpreted by the researcher to yield findings and implications. Overall, this research observed that epistemological positioning was generally consistent among pre-ministry students from differing institutional contexts. The CSID's stated majority rating for typical college graduates was reflected in each sample grouping: a point of transition between Positions 3 and 4, defined in the Perry Scheme as mid to late "Multiplicity." By certain measures, however, scores among context groups were distinguishable. For example, average scores for secular university students reflected a point very near, but slightly above, Position 3, while average ratings among Bible college and liberal arts university students reflected a point essentially midway between Positions 3 and 4. Also, when a filter was applied that eliminated the results of the oldest and youngest sample participants, the liberal arts university grouping reflected a distinguishably higher epistemological position than the other groupings. Evaluation of the research interview data according to the researcher's structured framework of epistemological priorities and competencies yielded findings that were consistent overall with the variations in levels of epistemological positioning as reported by the CSID.
In addition, numerous prominent themes bearing on participants' epistemological maturation emerged from analysis of interviewees' articulations. Finally, the effects of differing social-academic cultures on pre-ministry undergraduates' epistemological perspectives and maturation were examined. Evaluation of these themes and environmental conditions served to highlight numerous conformities as well as significant distinctions among pre-ministry students from differing institutional contexts.
279

Performance Analysis and Deployment Techniques for Wireless Sensor Networks

She, Huimin January 2012 (has links)
Recently, the wireless sensor network (WSN) has become a promising technology with a wide range of applications such as supply chain monitoring and environment surveillance. It is typically composed of multiple tiny devices equipped with limited sensing, computing and wireless communication capabilities. Design of such networks presents several technical challenges while dealing with various requirements and diverse constraints. Performance analysis and deployment techniques are required to provide insight on design parameters and system behaviors. Based on network calculus, a deterministic analysis method is presented for evaluating the worst-case delay and buffer cost of sensor networks. To this end, traffic splitting and multiplexing models are proposed and their delay and buffer bounds are derived. These models can be used in combination to characterize complex traffic flowing scenarios. Furthermore, the method integrates a variable duty cycle to allow the sensor nodes to operate at low rates, thus saving power. In an attempt to balance traffic load and improve resource utilization and performance, traffic splitting mechanisms are introduced for sensor networks with general topologies. To provide reliable data delivery in sensor networks, retransmission has been one of the most popular schemes. We propose an analytical method to evaluate the maximum data transmission delay and energy consumption of two types of retransmission schemes: hop-by-hop retransmission and end-to-end retransmission. In order to validate the tightness of the bounds obtained by the analysis method, the simulation results and analytical results are compared under various input traffic loads. The results show that the analytic bounds are correct and tight. Stochastic network calculus has been developed as a useful tool for Quality of Service (QoS) analysis of wireless networks. We propose a stochastic service curve model for the Rayleigh fading channel and then provide formulas to derive the probabilistic delay and backlog bounds in the cases of deterministic and stochastic arrival curves. The simulation results verify that the tightness of the bounds is good. Moreover, a detailed mechanism for bandwidth estimation of random wireless channels is developed. The bandwidth is derived from the measurement of statistical backlogs based on probe packet trains. It is expressed by statistical service curves that are allowed to violate a service guarantee with a certain probability. The theoretic foundation and the detailed step-by-step procedure of the estimation method are presented. One fundamental application of WSNs is event detection in a Field of Interest (FoI), where a set of sensors is deployed to monitor any ongoing events. To satisfy a certain level of detection quality in such applications, it is desirable that events in the region can be detected by a required number of sensors. Hence, an important problem is how to conduct sensor deployment for achieving certain coverage requirements. In this thesis, a probabilistic event coverage analysis method is proposed for evaluating the coverage performance of heterogeneous sensor networks with randomly deployed sensors and stochastic event occurrences. Moreover, we present a framework for analyzing node deployment schemes in terms of three performance metrics: coverage, lifetime, and cost. The method can be used to evaluate the benefits and trade-offs of different deployment schemes and thus provide guidelines for network designers. / QC 20120906
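For readers unfamiliar with network calculus, the deterministic bounds mentioned above can be illustrated with the textbook single-node case: a token-bucket arrival curve α(t) = b + r·t served by a rate-latency curve β(t) = R·(t − T)+. The sketch below computes the classical worst-case delay and backlog bounds for such a node; it is a generic illustration, not the thesis's splitting and multiplexing models, and the parameter values are made up.

```python
def single_node_bounds(b: float, r: float, R: float, T: float):
    """Worst-case bounds for a token-bucket flow (burst b, rate r) crossing a
    rate-latency server (rate R, latency T), assuming stability (r <= R)."""
    if r > R:
        raise ValueError("unstable node: arrival rate exceeds service rate")
    delay_bound = T + b / R      # horizontal deviation between arrival and service curves
    backlog_bound = b + r * T    # vertical deviation between arrival and service curves
    return delay_bound, backlog_bound

# Hypothetical sensor-node numbers: 2 kbit burst, 1 kbit/s flow, 10 kbit/s server, 50 ms latency.
d, q = single_node_bounds(b=2000, r=1000, R=10000, T=0.05)
print(f"delay <= {d*1000:.1f} ms, backlog <= {q:.0f} bits")  # delay <= 250.0 ms, backlog <= 2050 bits
```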
280

Protein Secondary Structure Prediction Using Support Vector Machines, Neural Networks and Genetic Algorithms

Reyaz-Ahmed, Anjum B 03 May 2007 (has links)
Bioinformatics techniques for protein secondary structure prediction mostly depend on the information available in the amino acid sequence. Support vector machines (SVM) have shown strong generalization ability in a number of application areas, including protein structure prediction. In this study, a new sliding window scheme is introduced that uses multiple windows to form the protein data for training and testing the SVM. An orthogonal encoding scheme coupled with the BLOSUM62 matrix is used to make the prediction. First, the predictions of the binary classifiers using multiple windows are compared with the single window scheme; the results show that a single window is not the better choice in all cases. Two new classifiers are introduced for effective tertiary classification. These new classifiers use neural networks and genetic algorithms to optimize the accuracy of the tertiary classifier. The accuracy levels of the new architectures are determined and compared with other studies. The tertiary architecture is better than most available techniques.
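To make the windowing and orthogonal encoding concrete, here is a small sketch that turns an amino acid sequence into fixed-length one-hot feature vectors, one per residue, using a single window. The window size and zero padding are assumptions for illustration; the study's multi-window scheme and BLOSUM62 coupling are not reproduced here.

```python
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard residues
AA_INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def orthogonal_window_features(sequence: str, window: int = 7) -> np.ndarray:
    """Return one feature vector per residue: the one-hot (orthogonal) encodings of the
    residues in a window centered on it, flattened; positions past the ends stay all-zero."""
    half = window // 2
    n = len(sequence)
    features = np.zeros((n, window * len(AMINO_ACIDS)))
    for i in range(n):
        for w in range(-half, half + 1):
            j = i + w
            if 0 <= j < n and sequence[j] in AA_INDEX:
                offset = (w + half) * len(AMINO_ACIDS)
                features[i, offset + AA_INDEX[sequence[j]]] = 1.0
    return features

X = orthogonal_window_features("MKVLA")  # 5 residues -> 5 vectors of length 7*20
print(X.shape)                           # (5, 140)
# Each row would then be paired with its secondary-structure label (H/E/C) to train the SVM.
```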
