51

A Study of the Causal Relationship between Money and Income in Taiwan

梁思瑜, LIANG, SI-YU Unknown Date (has links)
In recent years, Taiwan's large trade surpluses, the continued appreciation of the New Taiwan dollar, the rapid expansion of the money supply, and unusually active stock and real-estate markets have reflected shifts in Taiwan's economic and financial conditions. Under these changes, whether the central bank can influence income through monetary policy is an important research question in both academia and practice. Meanwhile, monetary theory still harbors many controversies, rooted in the different emphases and transmission channels stressed by the various schools, which lead analyses to diverge; domestic studies on the question have likewise reached rather mixed conclusions, owing to differences in model specification and sample period. The debate over unidirectional versus bidirectional influence between money and income (or prices) is long-standing. The causality tests proposed by Granger, Sims, and others were a major breakthrough in empirical research, yet because of certain methodological shortcomings, their empirical results often disagree. Beyond attempting to correct the methodological flaws of traditional empirical models, this thesis further analyzes causality testing when macroeconomic series are nonstationary and cointegrated, seeking a systematic and more reliable way to examine the basic relationship between money and income as a basis for adopting and revising policy. To review and correct the empirical methods thoroughly and to obtain robust, reliable results for Taiwan, the study is organized as follows. Chapter 1: Introduction: research motivation and review of previous studies. Chapter 2: Unit-root testing methods and results, using both the Perron-Phillips and Stock-Watson approaches. Chapter 3: Cointegration testing methods and results. Chapter 4: Causality testing under nonstationarity. Chapter 5: Empirical results. Chapter 6: Conclusion. Through systematic causality testing under different model specifications, the study aims to uncover fully and correctly the past relationship between money and income, as a reference for future economic forecasting and policy-making.
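The Granger-style causality test that the abstract invokes can be illustrated with a minimal sketch. This is illustrative only, not the thesis's corrected procedure: the simulated series, the lag length of 2, and the coefficients below are all invented for the example.

```python
import numpy as np

def granger_f_stat(y, x, lags=2):
    """F statistic for 'x Granger-causes y' from two OLS fits:
    restricted  : y_t on a constant and its own lags
    unrestricted: y_t on a constant, its own lags, and lags of x
    """
    rows_r, rows_u, target = [], [], []
    for t in range(lags, len(y)):
        ylags = [y[t - k] for k in range(1, lags + 1)]
        xlags = [x[t - k] for k in range(1, lags + 1)]
        rows_r.append([1.0] + ylags)
        rows_u.append([1.0] + ylags + xlags)
        target.append(y[t])

    def rss(rows):
        A, b = np.array(rows), np.array(target)
        beta, *_ = np.linalg.lstsq(A, b, rcond=None)
        r = b - A @ beta
        return float(r @ r)

    rss_r, rss_u = rss(rows_r), rss(rows_u)
    T, k_u = len(target), 1 + 2 * lags
    return ((rss_r - rss_u) / lags) / (rss_u / (T - k_u))

# Simulated example: x drives y with a one-period lag.
rng = np.random.default_rng(0)
x = rng.normal(size=300)
y = np.zeros(300)
for t in range(1, 300):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()

print(granger_f_stat(y, x))  # large F: lags of x help predict y
```

A large F in one direction and a small one in the other is the pattern such tests look for; the thesis's point is that this naive version is unreliable when the series are nonstationary or cointegrated.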
52

Studying the validity of IBM Watson Personality Insights in a sample of FEN students as an approach to the job profile for personnel selection

Díaz Chamorro, Héctor, López Leyton, Romina January 2015 (has links)
Seminar to qualify for the degree of Ingeniero Comercial, Mención Administración / The creation of the IBM Watson supercomputer has put on the table the use of programs that can analyze very large amounts of data almost instantaneously, offering teams and organizations a new range of tools for improving strategic decisions such as personnel selection. For this investigation, a study was carried out contrasting a personality test based on the five major personality dimensions (Big Five) with the results delivered by the IBM program, in order to gauge its validity in a closed context with a limited group of participants. Data were obtained from 47 students of the Facultad de Economía y Negocios of the Universidad de Chile, through their answers to the questionnaire and a psycholexical analysis of a personal essay. The results show that: (1) the dimensions of Agreeableness (A) and Neuroticism (N) are the ones IBM Watson Personality Insights detects with the least difficulty; (2) despite differences in academic performance and choice of specialization, all personality dimensions yielded very similar results in relative weights, related to the culture of the organization.
53

The experience of school-age children receiving a hematopoietic stem cell transplant

Laroche, Mélissa January 2007 (has links)
Thesis digitized by the Division de la gestion de documents et des archives of the Université de Montréal.
54

'Latitudinarian traditours' : Bishops Hoadly and Watson

Dearnley, John January 2014 (has links)
No description available.
55

CPU Performance Evaluation for 2D Voronoi Tessellation

Olsson, Victor, Eklund, Viktor January 2019 (has links)
Voronoi tessellation is used in several fields, including healthcare, construction, and urban planning. Because of this breadth of application, it is worth knowing the relative strengths and weaknesses of the algorithms used to generate tessellations, in terms of their efficiency. The objective of this thesis is to compare the execution times of two CPU implementations of Voronoi tessellation and determine which of the two is more efficient. The algorithms compared are the Bowyer-Watson algorithm and Fortune's algorithm. The Fortune's implementation used in the research is based on a pre-existing implementation, while the Bowyer-Watson implementation was written specifically for this research. Their difference in efficiency was determined by measuring and comparing execution times in an iterative manner, with the amount of data to be computed increased at each iteration. The results show that Fortune's algorithm is more efficient on the CPU when no acceleration techniques are applied to either algorithm: the Bowyer-Watson method took 70 milliseconds to process 3000 input points, while Fortune's method took 12 milliseconds under the same conditions. In conclusion, Fortune's algorithm was more efficient because the Bowyer-Watson algorithm performs unnecessary calculations, namely checking all existing triangles every time a new point is added. A suggested improvement would be to use a nearest-neighbour search technique when searching through triangles.
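As a rough illustration of the point-insertion strategy the abstract describes, here is a minimal, unoptimized Bowyer-Watson sketch in Python. This is not the thesis's implementation; the super-triangle size and the test points are arbitrary choices for the example. Note the naive step the abstract criticizes: every existing triangle is circumcircle-tested for each new point.

```python
def _ccw(a, b, c):
    # Orient a triangle counter-clockwise so the in-circle sign test is valid.
    cross = (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])
    return (a, b, c) if cross > 0 else (a, c, b)

def _in_circumcircle(tri, p):
    # Standard in-circle determinant; assumes tri is counter-clockwise.
    (ax, ay), (bx, by), (cx, cy) = tri
    ax -= p[0]; ay -= p[1]; bx -= p[0]; by -= p[1]; cx -= p[0]; cy -= p[1]
    return ((ax*ax + ay*ay) * (bx*cy - cx*by)
          - (bx*bx + by*by) * (ax*cy - cx*ay)
          + (cx*cx + cy*cy) * (ax*by - bx*ay)) > 0

def bowyer_watson(points):
    # Super-triangle large enough to enclose all inputs (arbitrary size here).
    st = [(-1e5, -1e5), (1e5, -1e5), (0.0, 1e5)]
    triangles = [_ccw(*st)]
    for p in points:
        # The naive step criticized in the abstract: test EVERY triangle.
        bad = [t for t in triangles if _in_circumcircle(t, p)]
        # Cavity boundary = edges belonging to exactly one bad triangle.
        counts = {}
        for t in bad:
            for e in ((t[0], t[1]), (t[1], t[2]), (t[2], t[0])):
                counts[frozenset(e)] = counts.get(frozenset(e), 0) + 1
        triangles = [t for t in triangles if t not in bad]
        for edge, n in counts.items():
            if n == 1:
                a, b = tuple(edge)
                triangles.append(_ccw(a, b, p))
    # Discard triangles that still touch the super-triangle.
    return [t for t in triangles if not any(v in st for v in t)]

tris = bowyer_watson([(0.0, 0.0), (2.0, 0.0), (1.0, 2.0), (1.0, 0.5)])
print(len(tris))  # 3: the hull triangle is split around the interior point
```

This produces the Delaunay triangulation, whose dual is the Voronoi tessellation. The list comprehension over `triangles` for every inserted point is exactly the quadratic-cost behaviour that a nearest-neighbour search over triangles would avoid.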
56

The coalescent structure of continuous-time Galton-Watson trees

Johnston, Samuel January 2018 (has links)
No description available.
57

Traditional Plus: Doc Watson's Transformation of Appalachian Music/Culture on the World's Stage

Olson, Ted S. 18 April 2019 (has links)
No description available.
58

The side-by-side model of DNA: logic in a scientific invention

Stokes, Terence Douglas January 1983 (has links)
Watson and Crick’s double-helical model of DNA is considered one of the great discoveries in biology. However, in 1976, two groups of scientists, one in New Zealand and the other in India, independently published essentially the same radical alternative to the double helix. The alternative, the Side-By-Side (SBS) or ‘warped zipper’ conformation for DNA, is not helical. Rather than intertwining, as Watson and Crick’s helices do, its two exoskeletal strands are topologically independent; thus, unlike the double helix, they may be separated during replication without unwinding. This dissertation presents, but does not arbitrate among, scientific arguments. Its concerns are meta-scientific: in particular, why and how the individuals who invented the ‘warped zipper’ came to do so. Against Popper and most recent philosophers of science, it is taken to be “the business of epistemology to produce what has been called a ‘rational reconstruction’ of the steps that have led the scientist to a discovery [Popper (1972), p.31, emphasis in the original].” On the received view, the invention of the ‘warped zipper’ must be irrational or, at best, non-rational, and is thereby excluded from philosophical investigation. I establish that this philosophical dogma is not true a priori, as is usually supposed, and that, in the case of the SBS structure of DNA, it is false a posteriori. The motivation for, and development of, the SBS structure for DNA reveals a process best characterized as significantly, though not entirely, rational.
59

Management of contraction : a case study

Rooney, J. A. J., n/a January 1980 (has links)
No description available.
60

Sequential Procedures for Nonparametric Kernel Regression

Dharmasena, Tibbotuwa Deniye Kankanamge Lasitha Sandamali, Sandamali.dharmasena@rmit.edu.au January 2008 (has links)
In a nonparametric setting, the functional form of the relationship between the response variable and the associated predictor variables is unspecified; however, it is assumed to be a smooth function. The main aim of nonparametric regression is to highlight important structure in data without any assumptions about the shape of the underlying regression function. In regression, random and fixed design models should be distinguished. Among the variety of nonparametric regression estimators currently in use, kernel-type estimators are the most popular: they provide a flexible class of nonparametric procedures by estimating the unknown function as a weighted average using a kernel function. The bandwidth, which determines the influence of the kernel, has to be chosen for any kernel-type estimator. Our focus is on the Nadaraya-Watson estimator and the local linear estimator, which belong to a class of kernel-type regression estimators called local polynomial kernel estimators. A closely related problem is determining the sample size required to achieve a desired level of accuracy for a nonparametric regression estimator. Since sequential procedures allow an experimenter to make decisions based on the smallest number of observations without compromising accuracy, we consider applying sequential procedures to a nonparametric regression model at a given point or series of points. The motivation for using such procedures is that in many applications the quality of estimating an underlying regression function in a controlled experiment is paramount; it is therefore reasonable to invoke a sequential estimation procedure that chooses a sample size, based on recorded observations, that guarantees a preassigned accuracy. We have employed sequential techniques to develop a procedure for constructing a fixed-width confidence interval for the predicted value at a specific point of the independent variable.
These fixed-width confidence intervals are developed using asymptotic properties of both the Nadaraya-Watson and local linear kernel estimators with data-driven bandwidths, and are studied for both fixed and random design contexts. The sample sizes for a preset confidence coefficient are optimized using sequential procedures, namely a two-stage procedure, a modified two-stage procedure, and a purely sequential procedure. The proposed methodology is first tested in a large-scale simulation study. The performance of each kernel estimation method is assessed by comparing its coverage accuracy with the corresponding preset confidence coefficient, by examining how closely the computed sample sizes match the optimal sample sizes, and by contrasting the estimates obtained from the two nonparametric methods with the actual values at a given series of design points of interest. We also employed the symmetric bootstrap method, an alternative way of estimating properties of unknown distributions: resampling is done from a suitably estimated residual distribution, and the percentiles of the approximate distribution are used to construct confidence intervals for the curve at a set of given design points. A methodology is developed for determining whether it is advantageous to use the symmetric bootstrap method to reduce the oversampling that is known to plague Stein's two-stage sequential procedure. The procedure developed is validated in an extensive simulation study, and we also explore the asymptotic properties of the relevant estimators. Finally, our proposed sequential nonparametric kernel regression methods are applied to problems in software reliability and finance.
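The Nadaraya-Watson estimator at the heart of the abstract is straightforward to sketch. Below is a minimal Gaussian-kernel version; it is illustrative only, since the thesis pairs such estimators with data-driven bandwidths and sequential stopping rules, none of which appear here, and the fixed bandwidth and toy data are invented for the example.

```python
import math

def nadaraya_watson(x_train, y_train, x0, h):
    # Estimate m(x0) as a kernel-weighted average of the observed responses:
    #   m_hat(x0) = sum_i K((x0 - x_i) / h) * y_i  /  sum_i K((x0 - x_i) / h)
    # with a Gaussian kernel K and bandwidth h.
    w = [math.exp(-0.5 * ((x0 - xi) / h) ** 2) for xi in x_train]
    return sum(wi * yi for wi, yi in zip(w, y_train)) / sum(w)

# Noiseless example: data on the line y = 2x. With a small bandwidth and a
# symmetric design, the estimate at an interior point sits on the true line.
xs = [i / 10 for i in range(21)]          # 0.0, 0.1, ..., 2.0
ys = [2 * x for x in xs]
print(nadaraya_watson(xs, ys, 1.0, 0.1))  # close to 2.0
```

The bandwidth `h` is the quantity the abstract's data-driven procedures would select; too small and the estimate becomes noisy, too large and it oversmooths toward the global mean.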
