About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
51

Kritische Würdigung unterschiedlicher Budgetierungsansätze / A Critical Appraisal of Different Budgeting Approaches

Gruner, Stefan. January 2006 (has links) (PDF)
Bachelor's thesis, University of St. Gallen, 2006.
52

Input-output analysis and the study of economic and environmental interactions

Victor, Peter Alan January 1971 (has links)
This thesis is an attempt to apply the technique of input-output analysis to the study of the relations between an economy and the environment which supports it. The opening chapter contains a brief justification of the use of input-output analysis for this purpose. It is argued that input-output models, which recognise many of the interactions among consumers and producers, can be extended so that they also take account of some of the interactions among consumers, producers, and the natural environment. Emphasis is placed upon the flow of materials between the environment and the economy. Waste products flow from the economy to the environment and 'free' goods flow in the opposite direction. There follows, in the second chapter, a review of the work of three writers who have explored the possibility of using general equilibrium and input-output models to study man's impact on the environment. The models presented by these economists are each found to possess unsatisfactory features. The theoretical core of the dissertation is an adaptation of two recently developed input-output models. Waste products and 'free' goods are introduced into both models in several different ways. The data requirements of the various models differ considerably and only the simplest of the models can be applied to the data on waste products and 'free' goods that are currently available. Canadian data, much of which were collected especially for this study, and the methods used in its estimation, are described in the fourth chapter. Chapter five is a summary of the results obtained from using the data on waste products and 'free' goods in conjunction with the Canadian input-output accounts for 1961. These results include estimates of the wastes produced and 'free' goods used in the production and consumption of one dollar's worth of each type of commodity manufactured in Canada. 
The results also include estimates of the Provincial distribution of waste products and 'free' goods that were associated with Canadian economic activity in 1961. Furthermore, an attempt is made to rank the commodities produced and consumed in Canada in terms of the relative impact on the environment of their production and consumption. The final experiment illustrates a method of estimating the ecologic implications of changing the pattern of Canadian consumption. To show this, an estimate is made of the effects of transferring 50 per cent of Canadian passenger car travel to public transportation. The last chapter of the thesis is a discussion of the uses to which the models and results might be put in formulating Government policy. Various methods are examined of bringing the production of wastes and use of 'free' goods within the realm of the market economy. It is argued that although it is generally more efficient to price the wastes and 'free' goods directly, this policy can only serve as a long-term goal. In the short term it is suggested that, for administrative reasons, emphasis should be placed on levying taxes on commodities so that their market prices reflect the ecologic cost of their production and consumption. A schedule of the relative sizes of such taxes is estimated using a model developed for the purpose together with the data collected as part of this study. In conclusion, the overall purpose of the dissertation is to suggest a method of analysis rather than to present comprehensive results. The results which are obtained are intended to be no more than indicative of what would be possible if more accurate and comprehensive data were available. / Faculty of Arts / Vancouver School of Economics / Graduate
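The environmentally extended input-output accounting described in this abstract can be illustrated with a toy Leontief model. The sectors, coefficient values, and waste coefficients below are invented for illustration and are not Victor's Canadian data; the sketch only assumes NumPy.

```python
import numpy as np

# Hypothetical 2-sector technical coefficient matrix A:
# A[i, j] = dollars of input from sector i needed per dollar of sector j's output.
A = np.array([[0.2, 0.3],
              [0.1, 0.4]])

# Final demand (dollars) for each sector's commodity.
d = np.array([100.0, 200.0])

# Leontief solution: total output x satisfies x = A @ x + d,
# i.e. x = (I - A)^-1 @ d.
x = np.linalg.solve(np.eye(2) - A, d)

# Environmental extension: w[i] = waste (kg) emitted per dollar of
# sector i's output (illustrative coefficients only).
w = np.array([0.05, 0.02])

# Total waste generated, directly and indirectly, by final demand d.
total_waste = w @ x

# Waste embodied in one dollar of final demand for each commodity,
# w @ (I - A)^-1: the per-dollar "ecologic cost" the thesis estimates.
waste_per_dollar = w @ np.linalg.inv(np.eye(2) - A)
```

The same machinery, run in reverse, supports the tax-schedule exercise: `waste_per_dollar` is exactly the kind of vector one would scale to price the ecologic cost into commodity prices.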
53

Health Risk Feedback: The Effects of ACE Insight on Stress Reactivity

Rued, Heidi Anna January 2018 (has links)
Exposure to adverse childhood experiences (ACEs) has lasting repercussions throughout an individual’s lifetime. An adult with a history of childhood trauma is at increased risk for excessive stress reactivity, which exacerbates the development of chronic disease. It is important to investigate how this information can be used for adult trauma survivors. This study assessed the psychophysiological impacts of providing “ACE insight”. Participants completed questionnaires and were given false feedback that their childhood experiences put them at increased risk for excessive stress reactivity and the development of disease. Following ACE insight, participants underwent a speech stressor task during which cardiovascular reactivity was monitored and psychological reactions were assessed. Results indicated that participants with more adverse childhoods reported feeling more worried and less happy about the feedback. Further, ACE insight caused a significant increase in cardiac output for participants with a history of childhood trauma. Implications and future directions are discussed.
54

Response to GENOVATE output

GENOVATE partner institutions, Ford, Jackie M. 11 1900 (has links)
Response to GENOVATE output with Professor Jackie Ford at the GENOVATE conference. / FP7
55

Efficient Computation of Regularities in Strings and Applications

Yusufu, Munina 08 1900 (has links)
Regularities in strings model many phenomena and thus form the subject of extensive mathematical studies. Perhaps the most conspicuous regularities in strings are those that manifest themselves in the form of repeated subpatterns, that is, repeats, multirepeats, repetitions, runs and others. Regularities in the form of repeating substrings were the basis of one of the earliest and still widely used compression algorithms and remain central in more recent approaches. Repeats and repetitions of lengthy substrings in DNA and protein sequences are important markers in biological research.

A large proportion of the available algorithms for computing regularities in strings depends on the prior computation of a suffix tree or, more recently, of a suffix array. The design of new algorithms for computing regularities should emphasize conceptual simplicity, as well as both time and space efficiency.

In this thesis, we investigate mathematical and algorithmic aspects of the computation of regularities in strings.

The first part of the thesis is the development of space- and time-efficient nonextendible (NE) and supernonextendible (SNE) repeats algorithms RPT, shown to be more efficient than previous methods in tests using different real data sets. In particular, we describe four variants of a new fast algorithm RPT1 that, based on suffix array construction, computes all the complete NE repeats in a given string x whose length (period) p ≥ pmin, where pmin ≥ 1 is a user-specified minimum. RPT1 uses 5n bytes of space directly, but requires the LCP array, whose construction needs 6n bytes. The variants RPT1-3 and RPT1-4 execute in O(n) time independent of alphabet size and are faster than the two other algorithms previously proposed for this problem. To provide a basis of comparison for RPT1, we also describe a straightforward algorithm RPT2 that computes complete NE repeats without any recourse to suffix arrays and whose total space requirement is only 5n bytes; however, this algorithm is slower than RPT1. Furthermore, we describe new fast algorithms RPT3 for computing all complete SNE repeats in x. Of these, RPT3-2 executes in O(n) time independent of alphabet size, thus asymptotically faster than the methods previously proposed. We conclude with a brief discussion of applications to bioinformatics and data compression.

The second part of the thesis deals with the issue of finding the NE multirepeats in a set of N strings of average length n under various constraints. A multirepeat is a repeat that occurs at least m times (m ≥ 2) in each of at least q ≥ 1 strings in a given set of strings. We show that RPT1 can be extended to locate the multirepeats, based on an investigation of the properties of multirepeats and various strategies. We describe algorithms to find complete NE multirepeats, first with no restriction on "gap length" (that is, the gap between occurrences of the multirepeat), then with bounded gaps. For the first problem, we propose two algorithms with worst-case time complexities O(Nn + α log₂N) and O(Nn + α) that use 9Nn and 10Nn bytes of space, respectively, where α is the alphabet size. For the second problem, we describe an algorithm with worst-case time complexity O(RNn) that requires approximately 10Nn bytes, where R is the number of multirepeats output. We remark that if we set the min and max constraints on gaps equal to zero in this algorithm, we can find all repetitions (tandem repeats) in arbitrary subsets of a given set. We demonstrate that our algorithms are faster, more flexible and much more space-efficient than algorithms recently proposed for this problem.

Finally, the third part of the thesis provides a convenient framework for comparing LZ factorization algorithms, which are used here in the computation of regularities in strings rather than in the traditional application to text compression. LZ factorization is the computational bottleneck in numerous string processing algorithms, especially in regularity studies, such as computing repetitions, runs, repeats with fixed gap, branching repeats, sequence alignment, local periods, and data compression. Since 1977, when Ziv and Lempel described a kind of string factorization useful for data compression, there has been a succession of algorithms proposed for computing "LZ factorization". In particular, several recent algorithms extend the usefulness of LZ factorization, especially to the computation of runs in a string x. We analyze each of these algorithms separately and compare their important aspects, for example, additional space required and handling mechanism. We also address their output format differences and some special features. We then provide a complete theoretical comparison of their time and space efficiency. We conduct intensive testing of both time and space performance and analyze the results carefully to draw conclusions about the situations in which these algorithms perform best. We believe that our investigation and analysis will be very useful to researchers in their choice of the proper LZ factorization algorithm for problems related to the computation of regularities in strings. / Thesis / Doctor of Philosophy (PhD)
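To make the notion of LZ factorization concrete, here is a naive quadratic sketch: each factor is either a letter's first occurrence or the longest prefix of the remaining suffix that also starts at an earlier position (earlier occurrences may overlap the current one). This is a didactic illustration of the factorization itself, not the linear-time suffix-array-based algorithms the thesis compares.

```python
def lz_factorize(s):
    """Naive LZ factorization of string s.

    Returns a list of factors: (letter,) for a first occurrence of a
    letter, or (position, length) for the longest previously occurring
    prefix of the remaining suffix. O(n^2) worst case.
    """
    factors = []
    i, n = 0, len(s)
    while i < n:
        best_len, best_pos = 0, -1
        for j in range(i):                       # candidate earlier start
            l = 0
            while i + l < n and s[j + l] == s[i + l]:
                l += 1                           # extend the match (may overlap i)
            if l > best_len:
                best_len, best_pos = l, j
        if best_len == 0:
            factors.append((s[i],))              # new-letter factor
            i += 1
        else:
            factors.append((best_pos, best_len))
            i += best_len
    return factors

fct = lz_factorize("abaababa")
# fct == [('a',), ('b',), (0, 1), (0, 3), (1, 2)]
```

The number of factors, and the positions where they start, are exactly what run-finding algorithms consume, which is why LZ factorization is the bottleneck the thesis benchmarks.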
56

Asynchronous Digital Multiplexing

Ojeda, Carlos F. 01 January 1972 (has links) (PDF)
No description available.
57

Pasture Intake, Digestibility and Fecal Kinetics in Grazing Horses

Holland, Janice Lee 11 March 1998 (has links)
Pasture intake of grazing livestock needs to be estimated to allow determination of energy and nutrient intakes. It is commonly estimated by difference, subtracting intakes of other feeds from estimated needs for dry matter or energy. However, these estimates are often erroneous, because they do not take individual animal variation in growth, reproductive status or activity level into account. One method that has had success in grazing ruminants has been the use of markers, or tracers, to estimate fecal output and nutrient digestibility. External markers are dosed to the animal and can be used to determine fecal output. Internal markers are an inherent part of the diet in question and can be used to determine dry matter and nutrient digestibilities. These estimates can then be used to give estimates of intake. These studies were conducted to evaluate the effectiveness of traditional marker methods in determining fecal output, digestibility, and thus intake in grazing horses. The first trial was conducted on 8 mature mares and geldings, housed in stalls, to determine if a common external marker, Cr, could be used to determine fecal output. Horses were dosed once daily with a molasses, Cr, and hay mixture for 12 d. Feces were collected throughout the day into individual tubs so that total fecal output (TC) could be measured. Daily fecal Cr excretion values (Ct, mg/kg DM) were fit to a monoexponential equation with one rate constant (k), rising to an asymptote (Ca): Ct = Ca − Ca·e^(−kt). Superior fits were found when a delay (d) was incorporated into the equation, estimating the time required for Cr to enter the prefecal pool: Ct = Ca − Ca·e^(−k(t−d)). Estimates of fecal output (FO) were calculated using the equation FO = (daily Cr dose) / Ca and provided good estimates when compared to TC values. Subsequent trials evaluated the use of internal markers and more frequent dosing of Cr to improve estimates of intake. Eight mature geldings were housed in stalls and were fed 2 hays in a replicated Latin square design. The monoexponential equation with the delay continued to fit the data well. Dosing Cr three times daily, at 8 h intervals, improved the predictions of FO. The internal marker yttrium (Y) consistently overestimated digestibility (D), whereas the internal markers n-alkanes gave a better estimate of digestibility. When the digestibility estimates were combined with the FO estimates to estimate dry matter intake (DMI, kg/d) as DMI = [FO / (100 − D)] × 100, with D expressed as a percentage, the combination including n-alkanes gave better estimates. Further studies found that dosing Cr for 12 d did not improve the fit of the monoexponential equation compared to dosing for only 8 d. Marker methods that had been developed in stalls were applied to grazing horses, and results continued to be promising. / Ph. D.
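The marker-kinetics chain described in this abstract (fit the rise-to-asymptote curve, recover fecal output from the dose and the asymptote, then back out intake from digestibility) can be sketched as follows. All numbers here (dose, asymptote, rate constant, digestibility) are invented for illustration, and SciPy's `curve_fit` stands in for whatever fitting procedure the trials actually used; digestibility D is taken as a fraction rather than a percentage.

```python
import numpy as np
from scipy.optimize import curve_fit

def cr_excretion(t, Ca, k, d):
    """Monoexponential rise to asymptote Ca with rate k and delay d:
    Ct = Ca * (1 - exp(-k * (t - d))) for t >= d, else 0."""
    return np.where(t >= d, Ca * (1.0 - np.exp(-k * (t - d))), 0.0)

# Synthetic daily fecal Cr concentrations (mg/kg DM) over 12 d of dosing;
# illustrative values, not data from the trial.
t = np.arange(1, 13, dtype=float)
obs = cr_excretion(t, 400.0, 0.6, 0.5)

# Fit Ca, k and the delay d to the observed excretion curve.
(Ca, k, d), _ = curve_fit(cr_excretion, t, obs, p0=[300.0, 0.5, 0.5])

dose_mg = 2000.0            # assumed daily Cr dose, mg
FO = dose_mg / Ca           # fecal output, kg DM/d  (mg / (mg/kg) = kg)
D = 0.55                    # assumed dry-matter digestibility, as a fraction
DMI = FO / (1.0 - D)        # dry matter intake, kg/d
```

With D as a fraction, FO / (1 − D) is algebraically the same relationship as the percentage form quoted in the abstract.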
58

Determination of cardiac output across a range of values in horses by M-mode echocardiography and thermodilution

Moore, Donna Preston 15 March 2004 (has links)
Determinations of cardiac output (CO) by M-mode echocardiography were compared with simultaneous determinations by thermodilution in 2 conscious and 5 anesthetized horses. A range of cardiac outputs was induced by use of a pharmacological protocol (dopamine, 4 µg/kg/min, dobutamine, 4 µg/kg/min, and 10 µg/kg detomidine plus 20 µg/kg butorphanol, in sequence). Changes from baseline CO in response to each drug were evaluated, and data were analyzed to determine whether there were any interactions between drug treatment and measurement method. The mathematical relationship between CO as determined by M-mode echocardiography (COecho) and as determined by thermodilution (COTD) was described and used to predict COTD from COecho. The 2 methods were compared with respect to bias and variability in order to determine the suitability of COecho as a substitute for COTD. Sources of the variability for each method were determined. Determination of CO by either method in standing horses was prohibitively difficult due to patient movement. The pharmacological protocol was satisfactory for inducing a range of cardiac outputs for the purpose of method comparison; however, use of dopamine did not offer any additional benefit over the use of dobutamine and was generally less reliable for increasing CO. Inclusion of detomidine provided an additional change in CO but did not increase the overall range of CO over that produced by halothane and dobutamine. COecho and COTD were significantly related by the predictive equation COTD = (0.63 ± 0.157) × COecho + (16.6 ± 3.22). The relatively large standard errors associated with COecho measurements resulted in a broad 95% prediction interval, such that COecho would have to change by more than 100% in order to be 95% confident that the determined value represents true hemodynamic change. COecho underestimated COTD by a mean of 10 ± 6.3 l/min/450 kg. The large standard deviation of the bias resulted in broad limits of agreement (−22.3 to +2.3 l/min/450 kg). Measurement-to-measurement variability accounted for 28% of the total variation in COTD values and 64% of the total variation in COecho values. Results might be improved if the mean of 3-5 consecutive beats was used for each measurement, but as determined in this experiment, COecho is too variable to have confidence in its use for precise determinations of CO. / Master of Science
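The bias and limits-of-agreement comparison in this abstract is the standard Bland-Altman analysis for method comparison. A minimal sketch with invented paired cardiac outputs (not the study's data):

```python
import numpy as np

def bland_altman(method_a, method_b):
    """Bland-Altman agreement between two measurement methods.

    Returns the mean bias (a - b) and the 95% limits of agreement,
    bias ± 1.96 * SD of the paired differences.
    """
    a = np.asarray(method_a, dtype=float)
    b = np.asarray(method_b, dtype=float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)            # sample SD of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Illustrative paired cardiac outputs (l/min), echo vs. thermodilution.
co_echo = [18.0, 22.5, 30.1, 25.4, 35.2, 28.3]
co_td   = [28.5, 31.0, 41.2, 36.8, 44.0, 39.1]
bias, (lower, upper) = bland_altman(co_echo, co_td)
```

A negative bias with a wide (lower, upper) band is precisely the pattern the study reports: echo systematically underestimates thermodilution, and the spread of the differences sets the limits of agreement.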
59

Modeling The Economic Impact of A Farming Innovation Group On A Regional Economy - A Top-Down Versus Hybrid Input-Output Approach

Gangemi, Michael Andrew, michael.gangemi@rmit.edu.au January 2008 (has links)
This thesis involves construction of input-output models measuring the economic impact of a farming innovation organisation (the Birchip Cropping Group) on the Victorian regional economy of Buloke Shire. The input-output modeling undertaken takes two forms: the first is a simple naïve top-down model, and the second a more sophisticated hybrid model. The naïve top-down model is based on input-output coefficients drawn from the Australian national input-output tables, and is regarded as naïve because these coefficients are not adjusted to take account of local economic factors. The hybrid model uses the same national input-output coefficients as a base, and then modifies them to better reflect industrial conditions in the Shire using a location quotient adjustment technique, as well as original survey data collected from entities operating in Buloke Shire. One of the aims of the thesis is to determine whether the simpler naïve top-down approach produces results consistent with the theoretically more accurate hybrid methodology, and thus whether the naïve top-down approach represents a reliable method of conducting regional economic impact analysis. That is, can such studies be undertaken accurately using a naïve top-down approach, or is it necessary to adopt the more resource-intensive methodology of a hybrid model? The results of the analysis suggest construction of a hybrid model is advisable, as the naïve top-down approach generally over-estimates the economic effects of the Birchip Cropping Group. That is, the economic impact multipliers estimated with the naïve top-down model appear to be too large, suggesting the time and effort involved in constructing the hybrid model were worthwhile. Using the hybrid model, the conclusion is that the Birchip Cropping Group has a significant effect on the regional economy of Buloke Shire, with the economic impact estimated at close to $600,000 in additional output, $61,000 in additional income, and 3.5 additional jobs per year.
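The location-quotient step described in this abstract can be sketched generically. The simple location quotient (SLQ) compares an industry's regional employment share with its national share, and scales down national input coefficients for supplying industries that are under-represented in the region. The employment figures and coefficient matrix below are invented, and this is the textbook SLQ technique rather than the thesis's actual Buloke Shire regionalisation.

```python
import numpy as np

def slq_adjust(A_national, regional_emp, national_emp):
    """Regionalise a national coefficient matrix via simple location
    quotients: rows (supplying industries) with SLQ < 1 are scaled
    down by their SLQ; rows with SLQ >= 1 are left unchanged."""
    r_share = np.asarray(regional_emp, dtype=float) / sum(regional_emp)
    n_share = np.asarray(national_emp, dtype=float) / sum(national_emp)
    slq = r_share / n_share
    scale = np.minimum(slq, 1.0)          # never scale a coefficient up
    return A_national * scale[:, None]    # one factor per supplying row

# Hypothetical 2-industry national coefficient matrix.
A_nat = np.array([[0.20, 0.10],
                  [0.05, 0.30]])

# Illustrative employment by industry: region vs. nation.
A_reg = slq_adjust(A_nat, regional_emp=[800, 200], national_emp=[500, 500])
```

Because under-represented industries get smaller coefficients, the regional multipliers computed from `A_reg` come out smaller than those from `A_nat`, which is the direction of correction the thesis reports.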
60

Konstrukce a využití časových input-output tabulek v kontextu dynamizovaného input-output modelu / The Construction and Use of the Time Input-Output Tables in context of the Semi-dynamic Input-Output Model

Zbranek, Jaroslav January 2011 (has links)
The aim of this dissertation is to develop a methodology for compiling symmetric Time Input-Output tables under the conditions of the Czech Republic. A further aim is to create an input-output model based on the compiled symmetric Time Input-Output tables. For practical applications it is crucial to link this model with the Semi-Dynamic Input-Output model, which, as conceived in this dissertation, takes several multiplier effects into account and thus offers a more comprehensive tool for input-output analysis. The first chapter traces the development of input-output tables and analyses in the Czech Republic and worldwide. The second chapter, also theoretical, maps the different kinds of input-output analyses carried out around the world using Physical, Time or Hybrid Input-Output tables. The third chapter is purely methodological: it describes the methodology for compiling symmetric Time Input-Output tables as well as the approach to the various sensitivity analyses. The fourth chapter covers the construction of the Semi-Dynamic Input-Output model and its formal linking with the input-output model based on the Time Input-Output tables. The fifth and final chapter is analytical: the methods described in the third chapter are applied to officially published data on the Czech economy. This analytical chapter serves as a tool for sensitivity analysis, in the sense of validating the quality of the compiled Time Input-Output tables.
