
Duration and Volatility Models for Stock Market Data

SAVANO SOUSA PEREIRA 12 January 2005 (has links)
This work generalizes the modelling of the time elapsed between consecutive trades in financial markets, hereafter called duration, and studies the impact of durations on instantaneous volatility. The starting point is the linear autoregressive conditional duration (ACD) model of Engle and Russell [3], who used the Exponential and Weibull distributions for the innovations, together with a GARCH-t model for the instantaneous volatility of high-frequency data. The generalization replaces the innovation distribution with the generalized Gamma, first suggested by Zhang, Russell and Tsay [9] in the threshold ACD (TACD) framework, the motivation being a more flexible model than that of Engle and Russell [3]. The ACD model with generalized Gamma innovations proved more adequate, capturing the under-dispersion of the data. We then estimated the instantaneous volatility model using the estimated durations as explanatory variables, obtaining results consistent with the literature.
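The abstract names the linear ACD model but does not reproduce its equations. As a rough sketch (not the thesis's estimation code), a linear ACD(1,1) with unit-mean Weibull innovations, one of the Engle and Russell specifications mentioned above, can be simulated as follows; all parameter values are illustrative assumptions.

```python
from math import gamma

import numpy as np

def simulate_acd(n, omega=0.1, alpha=0.1, beta=0.8, shape=0.9, seed=42):
    """Simulate a linear ACD(1,1): x_i = psi_i * eps_i, where
    psi_i = omega + alpha * x_{i-1} + beta * psi_{i-1} is the
    conditional expected duration and eps_i is a unit-mean Weibull draw."""
    rng = np.random.default_rng(seed)
    lam = 1.0 / gamma(1.0 + 1.0 / shape)  # rescale so E[eps] = 1
    x = np.empty(n)
    psi = omega / (1.0 - alpha - beta)    # start at the unconditional mean
    for i in range(n):
        eps = lam * rng.weibull(shape)
        x[i] = psi * eps
        psi = omega + alpha * x[i] + beta * psi
    return x

durations = simulate_acd(10_000)
print(durations.mean())  # theoretical unconditional mean: omega / (1 - alpha - beta)
```

Replacing the Weibull draw with a generalized Gamma variate (for example via `scipy.stats.gengamma`) would give the more flexible innovation distribution the thesis adopts, with the extra shape parameter controlling over- or under-dispersion.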

Party duration : examining the effects of incumbent party tenure on election outcomes

Thomas, Jason John 01 July 2015 (has links)
What consequences arise as a result of repeated control of the legislature by the same party or coalition? Are incumbent parties less likely to lose an election the longer they remain in power? Furthermore, as parties remain in power longer and longer, do the factors which electoral scholars have proposed influence elections have less of an impact on election outcomes? The purpose of this project is to examine the electoral impact of repeated control of the legislature by the same party or ruling coalition. In this project, I argue that the length of time an incumbent party or coalition has maintained control of the legislature is a critical consideration for scholars interested in studying elections. In doing so, I hope to develop a better understanding of elections, the factors which influence elections, and the mechanisms by which these factors affect election outcomes. Central to this project is the phenomenon I call party duration. I define party duration as the number of years the incumbent party has maintained control of the legislature in unicameral legislatures or the lower house in bicameral legislatures. This is the party that has secured enough seats to control the legislature independently in cases where a single party controls the legislature, or the party that serves as the largest party in the ruling coalition that controls the legislature in cases where a single party does not control the legislature by itself. Using cross-sectional time-series analysis to study a novel dataset, I show that not only does increasing party duration decrease the likelihood that an incumbent party will lose an election, controlling for various other factors, but I find evidence that party duration also moderates the effect of other variables which influence elections. Specifically, I focus on the impact that the length of party duration has on the effect of economic conditions on the incumbent party's performance in elections.
These findings highlight the importance of party duration, a variable which has previously not received attention from electoral scholars.
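The moderating claim, that longer party duration dampens the electoral effect of the economy, can be illustrated with a toy logit containing an interaction term. The functional form and every coefficient below are hypothetical illustrations, not estimates from the dissertation's dataset.

```python
import numpy as np

def loss_prob(duration, gdp_growth,
              b0=0.5, b_dur=-0.05, b_econ=-0.30, b_int=0.01):
    """Hypothetical logit for the probability an incumbent party loses.
    A positive interaction (b_int) shrinks the negative effect of
    economic growth as party duration rises."""
    z = b0 + b_dur * duration + b_econ * gdp_growth + b_int * duration * gdp_growth
    return 1 / (1 + np.exp(-z))

# The marginal effect of a growth shock is smaller for a long-tenured party:
young = loss_prob(5, 4) - loss_prob(5, 0)    # effect of a 4-point growth shock, 5-year party
old = loss_prob(30, 4) - loss_prob(30, 0)    # same shock, 30-year party
print(young, old)  # the older party's loss probability moves less
```

In an actual cross-sectional time-series analysis the same interaction would be estimated from data (for example with a logit including duration, economic conditions, and their product), rather than assumed.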

Comparative study of near-infrared pulsed laser machining of carbon fiber reinforced plastics

Heiderscheit, Timothy Donald 15 December 2017 (has links)
Carbon fiber-reinforced plastics (CFRPs) have gained widespread popularity as a lightweight, high-strength alternative to traditional materials. The unique anisotropic properties of CFRP make processing difficult, especially using conventional methods. This study investigates laser cutting by ablation as an alternative by comparing two near-infrared laser systems to a typical mechanical machining process. This research has potential applications in the automotive and aerospace industries, where CFRPs are particularly desirable for weight savings and fuel efficiency. First, a CNC mill was used to study the effects of process parameters and tool design on machining quality. Despite high productivity and flexible tooling, mechanical drilling suffers from machining defects that could compromise structural performance of a CFRP component. Rotational feed rate was shown to be the primary factor in determining the axial thrust force, which correlated with the extent of delamination and peeling. Experimental results concluded that machining quality could be improved using a non-contact laser-based material removal mechanism. Laser machining was investigated first with a Yb:YAG fiber laser system, operated in either continuous wave or pulse-modulated mode, for both cross-ply and woven CFRP. For the first time, energy density was used as a control variable to account for changes in process parameters, predicting a logarithmic relationship with machining results attributable to plasma shielding effects. Relevant process parameters included operation mode, laser power, pulse overlap, and cross-ply surface fiber orientation, all of which showed a significant impact on single-pass machining quality. High pulse frequency was required to successfully ablate woven CFRP at the weave boundaries, possibly due to matrix absorption dynamics. Overall, the Yb:YAG fiber laser system showed improved performance over mechanical machining. 
However, microsecond pulses cause extensive thermal damage and low ablation rates due to long laser-material interaction time and low power intensity. Next, laser machining was investigated using a high-energy nanosecond-pulsed Nd:YAG NIR laser operating in either Q-Switch or Long Pulse mode. This research demonstrates for the first time that keyhole-mode cutting can be achieved for CFRP materials using a high-energy nanosecond laser with long-duration pulsing. It is also shown that short-duration Q-Switch mode results in an ineffective cutting performance for CFRP, likely due to laser-induced optical breakdown. At sufficiently high power intensity, it is hypothesized that the resulting plasma absorbs a significant portion of the incoming laser energy by the inverse Bremsstrahlung mechanism. In Long Pulse mode, multi-pass line and contour cutting experiments are further performed to investigate the effect of laser processing parameters on thermal damage and machined surface integrity. A logarithmic trend was observed for machining results, attributable to plasma shielding similar to microsecond fiber laser results. Cutting depth data was used to estimate the ablation threshold of Hexcel IM7 and AS4 fiber types. Drilling results show that a 2.2 mm thick cross-ply CFRP panel can be cut through using about 6 laser passes, and a high-quality machined surface can be produced with a limited heat-affected zone and little fiber pull-out using inert assist gas. In general, high-energy Long Pulse laser machining achieved superior performance due to shorter pulse duration and higher power intensity, resulting in significantly higher ablation rates. The successful outcomes from this work provide the key to enable an efficient high-quality laser machining process for CFRP materials.
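As a sketch of how energy density can serve as a single control variable, the snippet below uses a common line-scan definition (power divided by scan speed times spot diameter) and an assumed logarithmic depth law consistent with plasma shielding. The constants, threshold, and even the exact energy-density definition are illustrative assumptions, not values from this study.

```python
import numpy as np

def energy_density(power_w, scan_speed_mm_s, spot_diam_mm):
    """Line-scan energy density in J/mm^2: one common definition,
    assumed here; the thesis may account for pulse overlap differently."""
    return power_w / (scan_speed_mm_s * spot_diam_mm)

def cut_depth_um(e_jmm2, a=120.0, e_th=2.0):
    """Illustrative logarithmic depth model, depth = a * ln(E / E_th)
    above an ablation threshold E_th, consistent with the saturation
    attributed to plasma shielding. a and e_th are made-up values."""
    return np.where(e_jmm2 > e_th, a * np.log(e_jmm2 / e_th), 0.0)

E = energy_density(200.0, 10.0, 0.5)  # 200 W, 10 mm/s, 0.5 mm spot -> 40 J/mm^2
print(E, cut_depth_um(E))
```

The point of the single variable is that different combinations of power, speed, and spot size collapsing to the same energy density should predict similar machining results, which is what makes the logarithmic fit testable across parameter sets.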

Effect of Short Duration Grazing on Soil Moisture Depletion and Plant Water Status in a Crested Wheatgrass Pasture

Wraith, Jon M. 01 May 1986 (has links)
A short duration grazing system was utilized to determine the effects of intensive periodic defoliation during spring on soil moisture depletion patterns and plant water status in a crested wheatgrass (Agropyron cristatum and A. desertorum) pasture in central Utah. Exclosures were constructed to compare grazed and ungrazed responses. Soil moisture was monitored to a depth of 193 cm at one- to two-week intervals from mid-April to late September using a neutron moisture gauge. Predawn and midday leaf water potentials were estimated using a pressure chamber technique. The two paddocks included in the study were grazed three times between mid-April and mid-June in 1985. A difference in time of grazing between the two paddocks was also examined for its effect on soil moisture depletion patterns and plant water status. Soil moisture was depleted at a higher rate within ungrazed plots than grazed plots during 13 April to 1 July in both paddocks. Soil moisture was depleted at a higher rate after 1 July in grazed compared to ungrazed plots in the early-grazed paddock; however, no difference in soil moisture depletion rate was noted after 1 July within the late-grazed paddock. Total cumulative depletion was greater within ungrazed plots than grazed plots in the early-grazed paddock from 6 June until 13 August, and from 23 May until 30 July in the late-grazed paddock. During the pre-July period, soil moisture was depleted more rapidly in the upper- and mid-portions of the soil profile in ungrazed plots. By 25 September there was no difference in total soil water depletion through 53 cm between grazed and ungrazed treatments, but ungrazed plots extracted relatively more water in the mid- and lower-portions of the soil profile. Grazing had no effect on predawn leaf water potentials prior to 1 July, but predawn leaf water potentials were lower for ungrazed plants than for grazed plants after 1 July.
Midday leaf water potentials were lower for grazed plants than for ungrazed plants before 1 July, but did not differ between grazed and ungrazed plants after 1 July. Time of grazing had no effect on either predawn or midday leaf water potentials.

The Concurrent Development Scheduling Problem (CDSP)

Paul, Leroy W 27 October 2005 (has links)
The concurrent development (CD) project is defined as the concurrent development of both hardware and software that is integrated together later for a deliverable product. The CD Scheduling Problem (CDSP) refers to the observation that most CD baseline project schedules being developed today are overly optimistic; that is, they finish late. This study researches the techniques being used today to produce CD project schedules and looks for ways to close the gap between the baseline project schedule and reality. In Chapter 1, the CDSP is introduced. In Chapter 2, a review is made of published works. A review is also made of commercial scheduling software applications to uncover their techniques, as well as of organizations doing research on improving project scheduling. In Chapter 3, the components of the CDSP are analyzed for ways to improve. In Chapter 4, the overall methodology of the research is discussed, including the development of the Concurrent Development Scheduling Model (CDSM), which quantifies the factors driving optimism. The CDSM is applied to typical CD schedules, with the results compared to Monte Carlo simulations of the same schedules. The results from using the CDSM on completed CD projects are also presented; the CDSM does well in predicting the outcome. In Chapter 5, the results of the experiments run to develop the CDSM are given. In Chapter 6, findings and recommendations are given. Specifically, a list of findings is given that a decision maker can use to analyze a baseline project schedule and assess the schedule's optimism. These findings will help define the risks in the CD schedule. Also included is a list of actions that the decision maker may be able to take to reduce the risk of the project and improve the chances of coming in on time.
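The schedule optimism the CDSP names can be illustrated with a toy Monte Carlo experiment: when hardware and software paths run concurrently and integration waits for the later one, the expected finish exceeds the single-path estimate (merge bias), and right-skewed task durations add further slip. The distributions and numbers below are illustrative assumptions, not the CDSM.

```python
import random

random.seed(1)

def simulate_finish(n_trials=20_000, mean_hw=100, mean_sw=100, spread=30):
    """Monte Carlo illustration of merge bias in concurrent development.
    Hardware and software proceed in parallel; integration starts at the
    later of the two finishes, so the expected project finish exceeds
    either path's most-likely estimate. Triangular durations (skewed
    toward overrun) are an illustrative assumption."""
    total = 0.0
    for _ in range(n_trials):
        hw = random.triangular(mean_hw - spread, mean_hw + 2 * spread, mean_hw)
        sw = random.triangular(mean_sw - spread, mean_sw + 2 * spread, mean_sw)
        total += max(hw, sw)  # integration waits for the slower path
    return total / n_trials

expected = simulate_finish()
print(expected)  # noticeably above the 100-day single-path estimate
```

A baseline schedule built from most-likely durations ignores both the skew and the `max` at the merge point, which is one simple mechanism behind the optimism the study investigates.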

Efficient duration modelling in the hierarchical hidden semi-Markov models and their applications

Duong, Thi V. T. January 2008 (has links)
Modeling patterns in temporal data has arisen as an important problem in engineering and science. This has led to the popularity of several dynamic models, in particular the renowned hidden Markov model (HMM) [Rabiner, 1989]. Despite its widespread success in many cases, the standard HMM often fails to model more complex data whose elements are correlated hierarchically or over a long period. Such problems are, however, frequently encountered in practice. Existing efforts to overcome this weakness often address only one of these two aspects, mainly due to computational intractability. Motivated by this modeling challenge in many real-world problems, in particular video surveillance and segmentation, this thesis aims to develop tractable probabilistic models that can jointly model duration and hierarchical information in a unified framework. We believe that jointly exploiting statistical strength from both properties will lead to more accurate and robust models for the task at hand. To tackle the modeling aspect, we base our work on an intersection between dynamic graphical models and the statistics of lifetime modeling. Realizing that the key bottleneck in existing work lies in the choice of the distribution for a state, we have successfully integrated the discrete Coxian distribution [Cox, 1955], a special class of phase-type distributions, into the HMM to form a novel and powerful stochastic model termed the Coxian Hidden Semi-Markov Model (CxHSMM). We show that this model can still be expressed as a dynamic Bayesian network, and that inference and learning can be derived analytically. Most importantly, it has four features superior to existing semi-Markov modelling: the parameter space is compact, computation is fast (almost the same as the HMM), closed-form estimation can be derived, and the Coxian is flexible enough to approximate a large class of distributions.
Next, we exploit hierarchical decomposition in the data by borrowing an analogy from the hierarchical hidden Markov model of [Fine et al., 1998, Bui et al., 2004] and introduce a new type of shallow structured graphical model that combines both duration and hierarchical modelling in a unified framework, termed the Coxian Switching Hidden Semi-Markov Model (CxSHSMM). The top layer is a Markov sequence of switching variables, while the bottom layer is a sequence of concatenated CxHSMMs whose parameters are determined by the switching variable at the top. Again, we provide a thorough analysis along with inference and learning machinery. We also show that semi-Markov models with arbitrary-depth structure can easily be developed. In all cases we further address two practical issues: missing observations due to unstable tracking, and the use of partially labelled data to improve training accuracy. Motivated by real-world problems, our application contribution is a framework to recognize complex activities of daily living (ADLs) and detect anomalies, to provide better intelligent caring services for the elderly. Coarser activities with their own duration distributions are represented using the CxHSMM. Complex activities are made up of a sequence of coarser activities and represented at the top level in the CxSHSMM. Intensive experiments are conducted to evaluate our solutions against existing methods. In many cases, the superiority of the joint modeling and the Coxian parameterization over traditional methods is confirmed. The robustness of our proposed models is further demonstrated in a series of more challenging experiments, in which the tracking is often lost and activities considerably overlap. Our final contribution is an application of the switching Coxian model to segment education-oriented videos into coherent topical units. Our results again demonstrate that such segmentation processes can benefit greatly from the joint modeling of duration and hierarchy.
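For intuition, a discrete Coxian (phase-type) duration can be sampled by walking left to right through a small chain of geometric phases with early absorption. The parameterization below is one common form of the discrete Coxian; the thesis's exact parameterization and estimation procedure may differ.

```python
import random

random.seed(7)

def sample_coxian(stay, absorb):
    """Sample a state duration from a discrete Coxian distribution.
    In phase i: each time step, remain with probability stay[i]; on
    leaving, end the duration with probability absorb[i], otherwise
    advance to phase i + 1 (the last phase always absorbs)."""
    i, d, m = 0, 0, len(stay)
    while True:
        d += 1                          # spend one time step in phase i
        if random.random() < stay[i]:
            continue                    # remain in the current phase
        if i == m - 1 or random.random() < absorb[i]:
            return d                    # absorbed: duration complete
        i += 1                          # advance to the next phase

# Three phases with increasing dwell times and early-exit probabilities.
samples = [sample_coxian([0.5, 0.8, 0.9], [0.3, 0.2, 1.0]) for _ in range(5000)]
print(sum(samples) / len(samples))
```

The appeal noted in the abstract is visible even in this sketch: a few phase parameters yield a flexible family of duration distributions, while each step of the walk remains a simple Markov transition, which is what keeps inference close to HMM cost.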

in|form: the performative object: the exploration of body, motion and form

Newrick, Tiffany Rewa January 2008 (has links)
Through the sculptural object, this thesis, in|form: The performative object, explores the relationships between body and object, viewer and artist, performance and the performative. By exploring the performativity of an object (and questioning how an object performs in relation to the body), the documented performances activate an inter-relational act between artist and object (I perform the object, the object performs me simultaneously). The work that unfolds from this investigation considers the placement of the viewer's body in relation to the artist's. A dialogue is formed between the three bodies: object, artist and viewer, creating a sense of embodiment within the work through this relationship. in|form explores this embodiment through the role of video documentation. The performances are constructed to be viewed solely through the documentation, which creates a discussion between the 'live' moment and the documented event, and how the viewer then relates to this. The performances take place as solo acts, but are constructed with the viewer in mind. As the viewer watches the documented performance of the action between artist and object in space, the relational nature of the work creates a second performance which embodies the viewer. This sole action, recorded and then viewed, considers the relational value of the body, specifically engaging with the abstraction of bodily formlessness within the object to reveal a bodily nature. Using the object to trace the movement of the body creates a language that communicates to, and about, both viewer and artist: through the awareness of passing time, through the large-scale projection of the documentation, through the bodily nature of the object, and through the performativity of the object's responsive nature to the artist's body as the pair navigate through space. in|form explores how the absence of the body (in a literal sense) considers the body as an object bound by time, at once physical yet transient.
By tracing the motion of the body through object, the viewer experiences the body through sensibility. Ultimately, the function of the body negotiating as a time-bound object is imitated through the performativity of the object with artist, and the elusiveness of this action emphasized by its documentation.

Measuring Equilibrating Forces of Financial Ratios - Empirical Study on Taiwan Stock Market

楊麗芬, Yang, Li Fen Unknown Date (has links)
Using a sample of 92 companies listed on the Taiwan stock market, this study applies first-difference serial correlation coefficients, chi-square goodness-of-fit tests, Mann-Whitney U tests, and Pearson product-moment correlation coefficients to examine the adjustment processes of the sample firms' financial ratios from 1981 to 1993, providing financial statement users with information about the properties of financial ratios. The main research questions are: (1) test twenty-two financial ratios in five categories (short-term solvency, earnings per share, financial structure, operating efficiency, and profitability) to determine whether each has an equilibrium value or follows a random walk, and, where an equilibrium value exists, measure the speed at which the ratio returns to it from disequilibrium; (2) because equilibrating forces may be affected by firm size, test whether the adjustment speeds of large and small firms differ significantly; (3) decompose the total adjustment rate into an industry component and a management component, test whether the management adjustment rate significantly exceeds the industry adjustment rate, and evaluate their relative weights in the total adjustment rate; (4) test whether historical values of financial ratios can be used to predict future ratio values. The main conclusions are as follows: (1) adjustment coefficients: except for the operating-efficiency ratios, whose adjustment coefficients are mostly zero, the adjustment coefficients of the remaining categories are greater than zero, indicating that equilibrium values exist; ratios involving short-term liquid items adjust faster than ratios involving long-term fixed items; (2) size effect: apart from a few ratios, the adjustment coefficients of large and small firms differ little, so there is no clear size effect; (3) industry versus management effects: on average, management adjusts faster than the industry, and both account for a substantial share of the total adjustment rate; (4) predictive ability: only three ratios reach statistical significance in support of the hypothesis that historical ratios can predict future ratio values; (5) in summary, previous foreign studies of financial-ratio equilibrium examined only one or two ratios per category, whereas these results show that within the same category the point estimates of the adjustment coefficients for large firms, small firms, management (β_M), and industry (β_I) can differ, sometimes considerably; several representative ratios should therefore be selected from each category to make the results more reliable.
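The abstract reports adjustment coefficients but not the estimating equation. Equilibrium-reversion studies of financial ratios typically use a partial-adjustment specification of the following form (an assumed illustration, not quoted from the thesis), where the coefficient β is the adjustment speed: β = 0 corresponds to a random walk and β = 1 to full adjustment within one period.

```latex
% Partial adjustment of a financial ratio R_t toward its equilibrium value R^*
R_t - R_{t-1} = \beta \left( R^{*} - R_{t-1} \right) + \varepsilon_t ,
\qquad 0 \le \beta \le 1 .
```

Under this reading, the finding that operating-efficiency ratios have β near zero means those ratios drift like random walks, while ratios of short-term liquid items revert quickly to their equilibrium values.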

Comparison of on-pond measurement and back calculation of odour emission rates from anaerobic piggery lagoons

Galvin, Geordie January 2005 (has links)
Odours are emitted from numerous sources and can form a natural part of the environment. The sources of odour range from natural to industrial, and an odour may be perceived by the community depending upon a number of factors: frequency, intensity, duration, offensiveness and location (FIDOL). In other words: how strong an odour is, at what level it becomes detectable, how long it can be smelt for, whether the odour is an acceptable or unacceptable smell as judged by the receptor (residents), and where the odour is smelt. Intensive livestock operations cover a wide range of animal production enterprises, all of which emit odours. Essentially, intensive livestock in Queensland, and to a certain extent Australia, refers to piggeries, feedlots and intensive dairy and poultry operations. Odour emissions from these operations can be a significant concern when the distance to nearby residents is small enough that odour from the operations is detected. The distance to receptors is a concern for intensive livestock operations as it may hamper their ability to develop new sites or expand existing sites. The piggery industry in Australia relies upon anaerobic treatment to treat its liquid wastes. These earthen lagoons treat liquid wastes through degradation via biological activity (Barth 1985; Casey and McGahan 2000). As these lagoons emit up to 80 per cent of the odour from a piggery (Smith et al., 1999), it is imperative for the piggery industry that odour be better quantified. Numerous methods have been adopted throughout the world for the measurement of odour, including trained field sniffers, electronic noses, olfactometry, and electronic methods such as gas chromatography. Although all of these methods can be used, olfactometry is currently deemed the most appropriate method for accurate and repeatable determination of odour.
This is due to the standardisation of olfactometry through the Australian/New Zealand Standard for Dynamic Olfactometry, and to the fact that olfactometry uses a standardised panel of "sniffers" which tends to give a repeatable indication of odour concentration. This is important because electronic measures often cannot relate odour back to the human nose, which is the ultimate assessor of odour. The way in which odour emission rates (OERs) from lagoons are determined is subject to debate. Currently the most commonly used methods are direct and indirect methods. Direct methods refer to placing enclosures on the ponds to measure the emissions, whereas indirect methods refer to taking downwind samples on or near a pond and calculating an emission rate. Worldwide, the odour community is currently divided into two camps that disagree on how to directly measure odour: those who use the UNSW wind tunnel or similar (Jiang et al., 1995; Byler et al., 2004; Hudson and Casey 2002; Heber et al., 2000; Schmidt and Bicudo 2002; Bliss et al., 1995), and those who use the USEPA flux chamber (Gholson et al., 1989; Heber et al., 2000; Feddes et al., 2001; Witherspoon et al., 2002; Schmidt and Bicudo 2002; Gholson et al., 1991; Kienbusch 1986). The majority of peer-reviewed literature shows that static chambers such as the USEPA flux chamber under-predict emissions (Gao et al., 1998b; Jiang and Kaye 1996) and, based on this, the literature recommends wind tunnel type devices as the most appropriate method of determining emissions (Smith and Watts 1994a; Jiang and Kaye 1996; Gao et al., 1998a). Based on these reviews it was decided to compare the indirect STINK model (Smith 1995) with the UNSW wind tunnel to assess the appropriateness of the methods for determining odour emission rates for area sources. The objective of this project was to assess the suitability of the STINK model and UNSW wind tunnel for determining odour emission rates from anaerobic piggery lagoons.
In particular, it examined whether the model compared well with UNSW wind tunnel measurements from the same source, the overall efficacy of the model, and the relationship between source footprint and predicted odour emission rate.
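For orientation, a wind-tunnel specific odour emission rate is commonly back-calculated as exhaust concentration times sweep-air flow divided by the covered footprint. The function below shows only that mass-balance skeleton with made-up numbers; the thesis's actual corrections (for example the wind-speed handling in the STINK model) are not reproduced.

```python
def wind_tunnel_oer(odour_conc_ou_m3, flow_m3_s, footprint_m2):
    """Specific odour emission rate (ou/m^2/s) from a wind-tunnel sample:
    exhaust odour concentration times sweep-air flow rate, divided by the
    pond area covered by the tunnel. This is the basic mass-balance form
    only; any standard-specific normalisations are omitted."""
    return odour_conc_ou_m3 * flow_m3_s / footprint_m2

# Illustrative numbers only: 500 ou/m^3 at 0.01 m^3/s over a 0.045 m^2 footprint.
soer = wind_tunnel_oer(500.0, 0.01, 0.045)
print(soer)  # odour units per square metre per second
```

Multiplying the per-area rate by the lagoon surface area then gives a whole-pond emission rate, which is the quantity an indirect (back-calculation) method such as the STINK model estimates from downwind samples, hence the comparison the project sets up.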

Midseason cold tolerance screening for the NSW rice improvement program

John Smith Unknown Date (has links)
The current rice varieties grown by Australian farmers are susceptible to low temperature events, particularly during the reproductive stage of plant development. The best management practices of sowing within the recommended time period and maintaining deep water (20–25 cm) through the microspore development stage offer only limited protection. There is a need to develop more cold tolerant varieties, and to do so requires the development of low-temperature screening capacity for the NSW rice breeding program. This study looked at the requirements of adapting a controlled-temperature glasshouse facility to enable screening for tolerance to low temperatures during the reproductive stage of rice development. The investigations were grouped into two areas: 1) the physical aspects of the low temperature facility, including the location of plants within the facility and within the tubs used to grow the plants, and whether these can influence the reliability of the screening; and 2) the biological effects of nitrogen (N) concentration in the plant at panicle initiation (PI) on plant susceptibility to low temperatures, and whether the growth stage of the plant relative to PI at the start of low temperature treatment influenced floret sterility. A series of nine experiments were conducted at the Deniliquin Agricultural Research and Advisory Station glasshouse facility using up to five rice varieties selected for their divergence in low-temperature tolerance. One other experiment was conducted in a different facility. The modified glasshouse facility in Deniliquin was effective in providing the targeted screening environment of a 27°C day and 13°C night temperature regime. There was, however, a smaller than expected effect of the low temperature exposure in some of the experiments, with sterility following low temperature ranging from 9.9% to 27.7%. There was also a higher than expected level of sterility in the controls (i.e.
not exposed to low temperature), with sterility levels up to 58% recorded in one experiment. The causes of these overall effects are not known. Notwithstanding these overall effects, there were a number of findings that are important for developing a reliable screening facility. The spatial arrangement of the plants within the low temperature facility affected the level of sterility, highlighting the need for experimental design to consider spatial variation. The existence of edge effects was identified within the tubs used to maintain permanent water on the potted plants, whereby the outer plants in the tubs were less damaged by the low-temperature treatment. The overall N level in the leaf tissue was low even at the equivalent rate of 250 kg N ha-1, and there was only a very modest and inconsistent response in N concentration at PI to N application rates ranging from 0 to 250 kg ha-1. However, the method of growing the plants in pots and of nitrogen fertiliser application did not alter the N concentration. The concentration was the same whether N was added on the same day as permanent water application or three days prior to permanent water application. Also, restricting the direction of water movement through the pots, and therefore the soil within the pots, did not alter the amount of N in the plants at PI. The low plant N concentrations were consistent across two glasshouses in which soil from the same source was used, suggesting a soil limitation. A soil test identified that the soil phosphorus (P) was at a level at which plants have responded to P application under field conditions, and the loamy texture of the soil had an associated low cation exchange capacity in comparison to the medium to heavy clay soil types commonly associated with rice growing. These factors may have reduced N retention and uptake and, in part, explain the low injury from the low temperature exposures.
In the variety Millin, there was no significant effect of the timing of the exposure on sterility until it commenced 12 to 15 days after PI. This is a significant finding for a breeding program that must expose lines of unknown phenological development. It means that even though there are small differences in the rates of development, these have no large effect on sterility. However, this response was not seen in the other varieties tested and thus requires further validation. It was difficult to induce repeatable levels of floret sterility in this series of experiments, most likely due to the low N concentrations, in part a consequence of the properties of the soil used to grow the plants. This highlights the importance of standardising all cultural aspects in order to provide uniform and repeatable screening information. The spatial effects highlight the importance of experimental design for effective exposure to low temperature treatments, incorporation of the capacity for spatial analysis in the statistical design, the use of standard variety checks for floret sterility after low temperature treatment, and the determination of N concentration in plant tissue prior to exposure.
