About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
431

The Q Theory of Housing Investment in Taiwan: An Empirical Test

Chen, Chien-Cheng 24 July 2012 (has links)
Housing investment plays a vital role in the real estate market. Although housing investment has been extensively investigated, applications of Tobin's Q theory to it remain relatively rare. Hence, the purpose of this study is to apply Tobin's Q theory to the analysis of housing investment, using quarterly data for Taipei City from 1973 Q2 to 2010 Q4. The numerator of the Q ratio is the pre-sale housing price, and the denominator is the value implied by rent. The empirical model is estimated using building permits and use permits as measures of housing investment. Moreover, because the housing market is imperfect, this study applies a threshold regression model to test whether the effect of the Q ratio differs across regimes. Finally, this study also compares housing investment across five cities. In conclusion, these findings imply that the Q ratio has a positive relationship with housing investment and exhibits a threshold effect, and that local housing investment responds differently to local variables.
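As a minimal formalization of the setup described above (the abstract does not state the exact regression specification, so the two-regime form and symbols below are assumptions for illustration):

```latex
% Q ratio as described: pre-sale housing price P_t over rent-based value R_t
Q_t = \frac{P_t}{R_t}
% Assumed two-regime threshold regression for housing investment I_t,
% with an estimated threshold \gamma on the Q ratio:
I_t =
\begin{cases}
  \alpha_1 + \beta_1 Q_t + \varepsilon_t, & Q_t \le \gamma \\
  \alpha_2 + \beta_2 Q_t + \varepsilon_t, & Q_t > \gamma
\end{cases}
```

A positive relationship with a threshold effect then corresponds to both slopes being positive but unequal across the two regimes.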
432

Detecting Near-Duplicate Documents using Sentence-Level Features and Machine Learning

Liao, Ting-Yi 23 October 2012 (has links)
Finding near-duplicate documents effectively in large-scale document collections has been a very important issue. In this paper, we propose a new method to detect near-duplicate documents in a large-scale dataset. Our method is divided into three parts: feature selection, similarity measurement, and discriminant derivation. In feature selection, each document is preprocessed before detection, removing symbols, stop words, and the like. We measure the weight of each term within a sentence and then choose the terms with the highest weights in that sentence; these terms are collected as features of the document, and together they form the document's feature set. Similarity measurement applies a similarity function to compute the similarity between two feature sets. Discriminant derivation uses a support vector machine (SVM) to train a classifier that identifies whether a document is a near-duplicate or not. An SVM is a supervised learning strategy that trains a classifier from training patterns. Given the characteristics of documents, sentence-level features are more effective than term-level features. Besides, learning a discriminant with an SVM avoids the trial-and-error effort required by conventional methods, in which trial and error is used to find a threshold, i.e., a discriminant value that defines the relation between documents. The final experimental analysis shows that our method is more effective at near-duplicate document detection than other methods.
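A hypothetical sketch of the three-stage pipeline in Python is given below. The term-weighting scheme (within-sentence term frequency), the top-k cutoff, the Jaccard similarity function, and the linear SVM kernel are all assumptions for illustration; the thesis may define each stage differently.

```python
from collections import Counter
from sklearn import svm

def sentence_features(document, top_k=3):
    """Feature selection: keep the top-k highest-weight terms per sentence."""
    features = set()
    for sentence in document.lower().split("."):
        terms = [t for t in sentence.split() if t.isalpha()]  # drop symbols
        weights = Counter(terms)  # assumed weight: within-sentence frequency
        features.update(term for term, _ in weights.most_common(top_k))
    return features

def similarity(features_a, features_b):
    """Similarity measurement between two feature sets (assumed: Jaccard)."""
    if not features_a or not features_b:
        return 0.0
    return len(features_a & features_b) / len(features_a | features_b)

# Discriminant derivation: an SVM trained on labeled pair similarities
# decides near-duplicate status, avoiding a hand-tuned threshold.
train_scores = [[0.90], [0.75], [0.20], [0.05]]  # toy similarity values
train_labels = [1, 1, 0, 0]                      # 1 = near-duplicate pair
classifier = svm.SVC(kernel="linear").fit(train_scores, train_labels)

score = similarity(sentence_features("the cat sat on the mat."),
                   sentence_features("a cat sat on the mat."))
print(classifier.predict([[score]]))  # classify the pair by its similarity
```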
433

Design and implementation of a sub-threshold wireless BFSK transmitter

Paul, Suganth 15 May 2009 (has links)
Power consumption in VLSI (very large scale integration) circuits is currently a major issue in the semiconductor industry. Power is a first-order design constraint in many applications. Several of these applications need extremely low power but do not need high speed. Sub-threshold circuit design can be used in these cases, but at such a low supply voltage these circuits exhibit an exponential sensitivity to process, voltage, and temperature (PVT) variations. In this thesis we implement and test a robust sub-threshold design flow that uses circuit-level PVT compensation to stabilize circuit performance. This is done by dynamically modulating the delay of a representative signal in the circuit and then phase locking it with an external reference signal. We design and fabricate a sub-threshold wireless BFSK transmitter chip. The transmitter is specified to transmit baseband signals at data rates up to 32 kbps over a distance of 1000 m. In addition to the sub-threshold implementation, we implement the BFSK transmitter using a standard cell methodology on the same die, operating at super-threshold voltages in a different voltage domain. Experiments using the fabricated die show that the sub-threshold circuit consumes 19.4x less power than the traditional standard cell based implementation.
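To make the compensation idea concrete, here is a deliberately abstract behavioral sketch, not the fabricated circuit: a tuning knob (standing in for something like an adjustable bias) is adjusted until the delay of a representative replica path matches an external reference period, in the spirit of a phase-locked loop. The delay model, gain, and all constants are invented for illustration.

```python
def replica_delay(knob, pvt_factor):
    """Delay of the representative path: slower at high pvt_factor,
    faster as the compensation knob increases (toy model)."""
    return pvt_factor * 100e-9 / (1.0 + knob)

def compensate(pvt_factor, reference_period=50e-9, gain=0.05, steps=300):
    """Iteratively adjust the knob until the replica delay locks
    to the external reference period."""
    knob = 0.0
    for _ in range(steps):
        error = replica_delay(knob, pvt_factor) - reference_period
        knob += gain * error / reference_period  # phase-lock-style correction
    return knob

# A slow process corner (pvt_factor > 1) settles at a larger knob value,
# so circuit performance is stabilized across PVT variation.
print(compensate(pvt_factor=1.3), compensate(pvt_factor=0.8))
```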
434

Effect of an upper temperature threshold on heat unit calculations, defoliation timing, lint yield, and fiber quality in cotton

Fromme, Daniel D. 15 May 2009 (has links)
Crop managers need to determine the most profitable time to defoliate cotton (Gossypium hirsutum L.) in a high-rainfall environment such as the coastal region of Texas. In cotton production, delaying defoliation exposes open bolls to a higher probability of rainfall and thus reduces lint yield and fiber quality. Premature defoliation, however, has detrimental effects on lint yield and fiber quality. A more recent method to time defoliation is based on heat-unit (HU or DD15) accumulation after physiological cutout, or five nodes above white flower (NAWF=5). Results have been inconsistent across a wide range of field environments when utilizing HU accumulation past cutout; therefore, adoption of this method has been limited. Many regions of the Cotton Belt have maximum daytime temperatures during the growing season that are above the optimum for maximum growth. Field studies were conducted for three consecutive growing seasons in the Brazos River Valley and Upper Gulf Coast regions of Texas. The purpose of this research was to identify an upper temperature threshold (UTT) for calculating degree days for defoliation timing. The experiment used a split-plot design with four replications. The main plots consisted of three upper temperature thresholds (32°C, 35°C, and no upper limit), and the subplots were five HU timings (361, 417, 472, 528, and 583) accumulated from the date of cutout. Utilizing a UTT to calculate daily HU failed to explain differences in the optimum time to defoliate based on accumulated HU from cutout for the upper thresholds investigated. Accumulated HU did, however, have a significant impact on defoliation timing. Comparison of the two locations showed that maximum lint yield was obtained at 472 HU and 52% open boll in Wharton County versus a maximum at 528 HU and 62% open boll for the Burleson County location. Employing the NACB=4 (nodes above cracked boll) method to time defoliation at both locations would have resulted in premature application of harvest aids and reduced lint yields. No differences were observed in adjusted gross income values in Wharton County among the 417, 472, 528, and 583 HU treatments. For Burleson County, adjusted gross income peaked in value at 528 HU.
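A hedged sketch of the daily heat-unit calculation with a UTT is shown below. The base temperature (15.6°C, consistent with the DD15 label) and the max/min averaging formula are assumptions; the study may define daily HU differently.

```python
def daily_heat_units(t_max_c, t_min_c, base_c=15.6, utt_c=None):
    """Daily HU from max/min temperature, optionally capped at a UTT."""
    if utt_c is not None:
        t_max_c = min(t_max_c, utt_c)  # cap the daily maximum at the UTT
    return max(0.0, (t_max_c + t_min_c) / 2.0 - base_c)

# With a 32 C cap, a hot day contributes fewer HU than with no upper limit,
# which changes how quickly a timing target (e.g., 472 HU) is reached.
print(daily_heat_units(38.0, 24.0, utt_c=32.0))  # 12.4
print(daily_heat_units(38.0, 24.0))              # 15.4
```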
435

An analysis of Texas rainfall data and asymptotic properties of space-time covariance estimators

Li, Bo 02 June 2009 (has links)
This dissertation includes two parts. Part 1 develops a geostatistical method to calibrate Texas NexRad rainfall estimates using rain gauge measurements. Part 2 explores the asymptotic joint distribution of sample space-time covariance estimators. The following two paragraphs briefly summarize these two parts, respectively. Rainfall is one of the most important hydrologic model inputs and is considered a random process in time and space. Rain gauges generally provide good quality data; however, they are usually too sparse to capture the spatial variability. Radar estimates provide a better spatial representation of rainfall patterns, but they are subject to substantial biases. Our calibration of radar estimates, using gauge data, takes season, rainfall type, and rainfall amount into account, and is accomplished via a combination of threshold estimation, bias reduction, regression techniques, and geostatistical procedures. We explore a varying-coefficient model to adapt to the temporal variability of rainfall. The methods are illustrated using Texas rainfall data for 2003, which include WSR-88D radar-reflectivity data and the corresponding rain gauge measurements. Simulation experiments are carried out to evaluate the accuracy of our methodology. The superiority of the proposed method lies in estimating total rainfall as well as point rainfall amounts. We study the asymptotic joint distribution of sample space-time covariance estimators of stationary random fields. We do this without any marginal or joint distributional assumptions other than mild moment and mixing conditions. We consider several situations depending on whether the observations are regularly or irregularly spaced, and whether one part or the whole domain of interest is fixed or increasing. A simulation experiment illustrates the asymptotic joint normality and the asymptotic covariance matrix of sample space-time covariance estimators as derived. An extension of this part develops a nonparametric test for full symmetry, separability, Taylor's hypothesis, and isotropy of space-time covariances.
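For reference, a standard sample space-time covariance estimator of the kind studied in Part 2 is written out below; the dissertation's exact estimator may differ in centering or normalization, so treat this as illustrative.

```latex
% Z(s,t): stationary space-time random field with sample mean \bar{Z};
% h: spatial lag, u: temporal lag;
% S(h,u): the set of observation pairs separated by lag (h,u).
\widehat{C}(\mathbf{h},u) = \frac{1}{|S(\mathbf{h},u)|}
  \sum_{(\mathbf{s},t)\,\in\, S(\mathbf{h},u)}
  \bigl(Z(\mathbf{s},t)-\bar{Z}\bigr)
  \bigl(Z(\mathbf{s}+\mathbf{h},\,t+u)-\bar{Z}\bigr)
```

Tests for properties such as full symmetry (C(h,u) = C(h,-u)) and separability (C(h,u) proportional to a product of purely spatial and purely temporal covariances) can then be built from the joint asymptotic normality of a vector of such estimators.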
436

Criteria Combinations in the Personality Disorders: Challenges Associated with a Polythetic Diagnostic System

Cooper, Luke D. 2010 May 1900 (has links)
Converging research on the diagnostic criteria for personality disorders (PDs) reveals that most criteria have different psychometric properties. This finding is inconsistent with the DSM-IV-TR PD diagnostic system, which weights each criterion equally. The purpose of the current study was to examine the potential effects of using equal weights for differentially functioning criteria. Using data from over 2,100 outpatients, response patterns to the diagnostic criteria for nine PDs were analyzed and scored within an item response theory (IRT) framework. Results indicated that combinations that included the same number of endorsed criteria (the same "raw score") yielded differing estimates of PD traits, depending on which criteria were met. Moreover, trait estimates from subthreshold criteria combinations often overlapped with those from diagnostic combinations (i.e., at threshold or higher): some subthreshold combinations of criteria reflected PD trait levels as high as, or higher than, some combinations at the diagnostic threshold. These results suggest that counting the number of criteria an individual meets provides only a coarse estimate of his or her PD trait level. Suggestions for the improved measurement of polythetically defined mental disorders are discussed.
437

Making Diagnostic Thresholds Less Arbitrary

Unger, Alexis Ariana 2011 May 1900 (has links)
The application of diagnostic thresholds plays an important role in the classification of mental disorders. Despite their importance, many diagnostic thresholds are set arbitrarily, without much empirical support. This paper seeks to introduce and analyze a new, empirically based way of setting diagnostic thresholds for a category of mental disorders that has historically had arbitrary thresholds: the personality disorders (PDs). I analyzed data from over 2,000 participants who were part of the Methods to Improve Diagnostic Assessment and Services (MIDAS) database. Results revealed that functional outcome scores, as measured by Global Assessment of Functioning (GAF) scores, could be used to identify diagnostic thresholds, and that the optimal thresholds varied somewhat by PD along the spectrum of latent severity. Using the item response theory (IRT)-based approach, the optimal thresholds along the spectrum of latent severity for the different PDs ranged from θ = 1.50 to 2.25. Effect sizes using the IRT-based approach ranged from .34 to 1.55. These findings suggest that linking diagnostic thresholds to functional outcomes, and thereby making them less arbitrary, is an achievable goal. This study has introduced a new and uncomplicated way to empirically set diagnostic thresholds while also taking into consideration that items within diagnostic sets may function differently. Although this is purely an initial demonstration meant only to serve as an example, the approach suggests that diagnostic thresholds for all disorders could one day be set on an empirical basis.
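For context, the θ values above live on an IRT latent-severity scale. A common IRT model for binary diagnostic criteria is the two-parameter logistic (2PL) model sketched below; the abstract does not name the exact model used, so the 2PL here is an illustrative assumption.

```latex
% 2PL: probability that respondent j endorses criterion i, given latent
% severity \theta_j, discrimination a_i, and location (severity) b_i.
P(x_{ij} = 1 \mid \theta_j) = \frac{1}{1 + \exp\!\left[-a_i(\theta_j - b_i)\right]}
```

Under such a model, a diagnostic threshold can be placed directly on θ (e.g., θ ≥ 1.50) rather than on an equally weighted criterion count.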
438

A Low-power High-speed 8-bit Pipelining CLA Design Using Dual Threshold Voltage Domino Logic and Low-cost Digital I/Q Separator for DVB-T

Cheng, Tsai-Wen 10 July 2006 (has links)
This thesis covers two topics. One is a low-power, high-speed 8-bit pipelined carry-lookahead adder (CLA) design using dual threshold voltage (dual-Vth) domino logic. The other is a low-cost digital I/Q separator for DVB-T receivers. First, a high-speed, low-power 8-bit CLA using dual-Vth domino logic blocks arranged in a PLA-like style with pipelining is presented. Because a cascaded set of domino logic blocks precharges in parallel but evaluates sequentially, transistors in the precharge part of the dual-Vth domino logic are replaced by high-Vth transistors to reduce subthreshold leakage current through OFF transistors, while transistors in the evaluation part use low-Vth transistors for speed. Moreover, an nMOS transistor is inserted in the precharge path of the output inverter so that the two-phase dual-Vth domino logic can be properly applied in a pipeline structure. Consequently, the proposed design keeps the advantage of high speed while achieving low power dissipation. The low-cost digital I/Q separator is presented in the second part of the thesis. Using a digital I/Q separator in place of the traditional analog one allows the design to overcome gain and phase mismatch problems between the I and Q channels. The proposed design can be realized with inverters and shifters, achieving the goal of low cost.
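One classic way to realize a digital I/Q separator with only inverters and shifters is fs/4 downconversion, where mixing with cos and sin at a quarter of the sample rate reduces to multiplying by the sequences [1, 0, -1, 0] and [0, -1, 0, 1]. The sketch below illustrates that technique; whether it matches the structure actually used in the thesis is an assumption.

```python
import numpy as np

def iq_separate_fs4(x):
    """Split a real IF signal, sampled at exactly 4x the carrier, into
    I and Q branches; only sign flips and sample selection are needed."""
    mix_i = np.resize([1, 0, -1, 0], len(x))   # cos(pi * n / 2)
    mix_q = np.resize([0, -1, 0, 1], len(x))   # -sin(pi * n / 2)
    return x * mix_i, x * mix_q  # low-pass filtering would follow

fs, fc = 4.0e6, 1.0e6                    # sample rate exactly 4x carrier
t = np.arange(64) / fs
x = np.cos(2 * np.pi * fc * t + 0.3)     # real input with arbitrary phase
i, q = iq_separate_fs4(x)                # matched I/Q gain by construction
```

Because both branches are derived from the same sampled sequence, the gain and phase mismatch inherent in separate analog I/Q paths is avoided by construction.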
439

none

Huang, Yi-Hsuan 27 June 2007 (has links)
With the liberalization of financial markets, the growth of international trade, and the prosperity of foreign exchange markets, investors can hedge, speculate, or engage in interest arbitrage in these markets. Market efficiency is therefore worthy of extensive investigation and analysis in international finance. According to the simple market efficiency hypothesis, there would be a long-run relationship between the spot exchange rate and the forward exchange rate if the foreign exchange market is efficient. Under this premise, this study first examines whether there is a long-run relationship between the spot and forward exchange rates using linear cointegration theory, and at the same time tests whether the simple market efficiency hypothesis holds in practice. Next, using a nonlinear threshold cointegration approach, it investigates whether an apparent threshold effect exists among the variables and examines the adjustment behavior in the long-run equilibrium process. The results show that there is an apparent threshold effect and that adjustment behavior is inconsistent across regimes in the long-run equilibrium process.
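A common formalization of the hypothesis tested here is the cointegrating regression of the spot rate on the lagged forward rate; the thesis's exact specification is not given in the abstract, so the form below is an assumption.

```latex
% s_t: log spot rate; f_{t-k}: log forward rate quoted k periods earlier.
s_t = \alpha + \beta f_{t-k} + \varepsilon_t
% Simple market efficiency requires (s_t, f_{t-k}) to be cointegrated with
% \alpha = 0, \beta = 1; threshold cointegration further allows the speed
% of adjustment toward this equilibrium to differ across regimes.
```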
440

Spatial Formation Of The Interface Between University And City / Consideration Of The Interfaces Of Ankara University And METU In Their Own Contexts

Kose, Semra 01 September 2010 (has links) (PDF)
Universities have a significant role in society as generators of economic activity, as land developers, as neighbors, and as property owners; they are therefore focal points in the community. Every university lives within a surrounding community and creates its own relations with the neighborhoods around it. The space where the university confronts the city is shaped according to the needs of the people of the university and the inhabitants of the area, so every university creates its own interface with the city in accordance with its location and its inhabitants. When the city or the university was planned, however, this interface zone was not taken into account: it has been treated as a part of the city even though it neighbors the university, and no attempt was made to design this zone or to include it in decisions when designing the university. This space therefore develops its own character over time, and since it lies between the city and the university, it carries the character of both. The main aim of this study is to examine the spatial formation of the university-city interface with respect to the planning decisions and spatial features of the area, by investigating two different types of universities in their own contexts in Ankara: Ankara University and METU. In this context, the spatial character of the interface area is defined by examining this space as a transitional area, a boundary, and a threshold. Then universities and their historical development in urban space are examined, and the relations between these two domains are investigated through selected universities in Europe and the USA. Finally, the situation of universities in Turkey is addressed, and the formation of the interface areas around the campuses of the two selected universities in Ankara is explored.
