71

TEACHING ADDITION THROUGH STIMULUS EQUIVALENCE TRAINING USING THE PEAK-E CURRICULUM

Macke, Greg W. 01 May 2016 (has links)
The purpose of the study was to evaluate the efficacy of the procedures described in the PEAK-E curriculum in teaching addition skills to children with developmental disabilities. In the present study, three participants were taught to match sample addition problems (A) to a number of pictures (B) corresponding to the sum of A (A-B), and to match sample pictures (B) to textual numbers (C) (B-C). They were then tested to see if they could match sample addition problems (A) to the textual numbers (C) that were the solutions of the addition problems (A-C). Following mastery of the A-B and B-C relations, none of the participants were able to demonstrate the derived transitive A-C relation. An additional training phase was then conducted across all participants, in which two of the five stimulus classes were given “equivalence” C-A training (matching the textual number C to the equation A), after which all of the participants were able to demonstrate the derived transitive A-C relations across all stimulus classes without direct training of any of the remaining classes. The results expand on previous research evaluating behavioral approaches to teaching math skills by showing how the development of equivalence classes can result in the untrained emergence of novel math skills. Keywords: Stimulus Equivalence, PEAK, Addition, Autism, Developmental Disabilities
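The trained and derived relations described in this abstract can be sketched computationally: treating each trained stimulus pair as an edge and closing the set of pairs under symmetry and transitivity models how an untrained A-C relation can emerge. This is a minimal illustration only; the stimulus names below are hypothetical, not the study's actual materials.

```python
def derived_relations(trained):
    """Close a set of trained stimulus pairs under symmetry and
    transitivity, modelling the emergence of derived relations."""
    rel = set(trained)
    rel |= {(b, a) for a, b in rel}              # symmetry
    changed = True
    while changed:                               # transitivity
        changed = False
        new = {(x, z) for x, y in rel for y2, z in rel
               if y == y2 and x != z and (x, z) not in rel}
        if new:
            rel |= new
            rel |= {(z, x) for x, z in new}      # keep the closure symmetric
            changed = True
    return rel

# Hypothetical stimuli: A1 = the problem "2+3", B1 = a picture of
# five objects, C1 = the written numeral "5"
trained = {("A1", "B1"), ("B1", "C1")}           # A-B and B-C training
derived = derived_relations(trained)
assert ("A1", "C1") in derived                   # derived transitive A-C relation
```

Only the directly trained A-B and B-C pairs go in; the A-C pair falls out of the closure, mirroring the emergence the study tests for.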
72

Equivalence of 2-D multi-topic category and Ana-bicategory

Reddappagari, Parama Jyothi January 2003 (has links)
No description available.
73

Utilizing novel dose equivalence methodologies to examine cocaine's effects on the vasculature

Lamarre, Neil Stanley January 2013 (has links)
Doctor of Philosophy, Temple University School of Medicine, 2013. Doctoral Advisory Committee Chair: Ronald J. Tallarida, Ph.D. Cocaine abuse and addiction constitute a serious health problem, resulting in thousands of emergency room visits and deaths each year in the United States. Cocaine is particularly toxic to the cardiovascular system, including deleterious effects on the peripheral vasculature. These effects are not well understood, but evidence suggests chronic cocaine use may lead to endothelial dysfunction, thereby increasing the relative risk of a number of other cardiovascular diseases, including stroke, aneurysm, myocardial infarction, and hypertension. Data from our lab and others suggest that the presence of a functional endothelium has a dramatic, agonist-specific effect on the contractility of the rat aorta. Attenuation of this endothelium-dependent vasodilatory component of agonist action is a primary feature of endothelial dysfunction. We have utilized dose equivalence theory to calculate the dose-response relationship for the endothelium-dependent vasodilatory component of an agonist causing overt vasoconstriction. This component cannot be measured directly, but our novel methodology allows us to quantitate agonist-specific impairment of vasodilation and describe it using the familiar parameters of the dose-response curve. Another strength of this method, relative to currently used in vitro methods, is that it avoids the confounding variable of a second agonist used to produce the initial vasoconstriction. To validate the methodology, a pilot study was performed examining endothelial dysfunction in STZ-induced diabetic rats, as a positive control for endothelial dysfunction.
Interestingly, this treatment showed impairment in the endothelium-dependent vasodilatory component of action of norepinephrine, but not of angiotensin-II. Thus, our initial hypothesis was confirmed: disruptions of the vasodilatory components of various agonists are independent, and agonist-specific information may prove useful. Next, we employed our new methodology, utilizing the rat aorta as our vascular model, to test the hypothesis that chronic cocaine administration causes endothelial dysfunction. We first examined the endothelium-dependent vasodilatory component of a number of physiologically important vasoconstrictors and attempted to determine which vasodilatory mediators contributed to the effect. We found the endothelium to have a profound effect on the dose-response curves of three important endogenous agonists. These data suggest that under conditions of endothelial dysfunction, exaggerated vasoconstriction could occur even within normal plasma concentration ranges of these vasoconstrictors, resulting in elevated blood pressure and further damage to the endothelium over time. No endothelial dysfunction was observed with this cocaine treatment paradigm, using either our methodology or the standard approach. This may be a result of insufficient duration of cocaine treatment, or of our selection of the rat aorta as a model. We then investigated which vasodilatory mechanisms were involved in this vasodilatory component of action. We inhibited various endothelium-derived mediators of this component (such as nitric oxide or prostacyclin), which revealed differential activation of these mediators by the agonists examined. For example, inhibition of nitric oxide synthesis abolished the endothelium-dependent vasodilatory component of endothelin-1, but only partially attenuated that of angiotensin-II.
Thus, the agonist-specific pattern of impairment may also prove useful in examining the underlying mechanisms of impaired vasodilation. Endothelial dysfunction is one reported consequence of long-term cocaine abuse; however, there are conflicting reports on the acute vascular effects of cocaine, with some concluding that cocaine is a vasoconstrictor and others reporting its action as a vasodilator. There are in vitro reports of cocaine causing release of vasoconstrictors from the endothelium, which supports the longstanding notion of cocaine as a vasoconstrictor. However, one recent report demonstrates a dose-dependent vasodilatory effect of cocaine in rat aorta that is independent of the endothelium. This complexity is perhaps due, in part, to cocaine's affinity for a number of molecular targets acting in combination. In examining the acute action of cocaine in our preparation, we observed an "inverted-U" shaped dose response, also referred to as a hormetic dose-response curve. We then applied dose equivalence methodology to derive the "unknown" second component contributing to the vasodilatory action of cocaine at higher doses. This methodology lets us calculate this unknown component and describe it with the familiar parameters of a dose-response curve, which could potentially aid in its identification. The preliminary studies with acute cocaine utilized a sub-maximal dose of phenylephrine in order to observe tension changes in either direction. This prompted us to further characterize the interaction of cocaine with other alpha adrenoceptor agonists. Importantly, because cocaine alone had no effect at doses up to 100 µM but potentiated the vasoconstriction of alpha agonists, the interaction is synergistic. This constitutes evidence of a previously undescribed mechanism contributing to cocaine's vasoconstricting effect.
In vivo, reuptake inhibition is a major mechanism for cocaine-induced vasoconstriction, but is excluded in this experiment by virtue of low levels of sympathetic innervation in the rat aorta, and the use of methoxamine, an alpha agonist not subject to the reuptake mechanisms. This interaction may contribute to cocaine-induced vasoconstriction in the coronary arteries, especially in circumstances of endothelial dysfunction. In summary, the work presented in this dissertation applies new methodologies utilizing dose equivalence theory to the study of cocaine's effects on peripheral vasculature, and presents novel findings of synergy with respect to cocaine's enhancement on the action of alpha adrenoceptor-mediated vasoconstriction. / Pharmacology
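The dose-equivalence step described in this abstract rests on inverting a reference dose-effect curve: an observed effect is converted into the dose of a reference agonist that would have produced it. Below is a minimal sketch using the standard Hill dose-effect model; the parameter values are arbitrary illustrations, not figures from the dissertation.

```python
def hill_effect(dose, emax, ed50, n=1.0):
    """Hill dose-effect model: E = Emax * d^n / (d^n + ED50^n)."""
    return emax * dose**n / (dose**n + ed50**n)

def equivalent_dose(effect, emax, ed50, n=1.0):
    """Invert the Hill model: the dose of the reference agonist that
    would produce the given effect (the dose-equivalence step)."""
    if not 0 < effect < emax:
        raise ValueError("effect must lie strictly between 0 and Emax")
    return ed50 * (effect / (emax - effect)) ** (1.0 / n)

# Hypothetical reference curve: Emax = 100, ED50 = 10 (arbitrary units)
e = hill_effect(10.0, emax=100.0, ed50=10.0)    # half-maximal effect at ED50
d = equivalent_dose(e, emax=100.0, ed50=10.0)   # recovers the dose, 10.0
```

Subtracting such an equivalent dose of an observed composite response from the directly administered dose is one way the effect of an unmeasurable second component can be isolated, which is the spirit of the approach described above.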
74

Static Learning for Problems in VLSI Test and Verification

Syal, Manan 01 July 2005 (has links)
Static learning in the form of logic implications captures Boolean relationships between various gates in a circuit. In the past, logic implications have been applied in several areas of electronic design automation (EDA) including: test-pattern-generation, logic and fault simulation, fault diagnosis, logic optimization, etc. While logic implications have assisted in solving several EDA problems, their usefulness has not been fully explored. We believe that logic implications have not been carefully analyzed in the past, and this lack of thorough investigation has limited their applicability in solving hard EDA problems. In this dissertation, we offer deeper insights into the Boolean relationships exhibited in a circuit, and present techniques to extract their full potential in solving two hard problems in test and verification: (1) Efficient identification of sequentially untestable stuck-at faults, and (2) Equivalence checking of sequential circuits. Additionally, for the dissertation, we define a new concept called multi-cycle path delay faults (M-pdf) for latch based designs with multiple clock domains, and propose an implications-based methodology for the identification of untestable M-pdfs for such designs. One of the main bottlenecks in the efficiency of test-pattern-generation (TPG) is the presence of untestable faults in a design. State-of-the-art automatic test pattern generators (ATPG) spend a lot of effort (in both time and memory) targeting untestable faults before aborting on such faults, or, eventually identifying these faults as untestable (if given enough computational resources). In either case, TPG is considerably slowed down by the presence of untestable faults. Thus, efficient methods to identify untestable faults are desired. In this dissertation, we discuss a number of solutions that we have developed for the purpose of untestable fault identification. 
The techniques that we propose are fault-independent and explore properties associated with logic implications to derive conclusions about untestable faults. Experimental results for benchmark circuits show that our techniques achieve a significant increase in the number of untestable faults identified, at low memory and computational overhead. The second problem that we address in this dissertation is that of determining the equivalence of sequential circuits. During the design phase, hardware goes through several stages of optimization (for area, speed, power, etc.). Determining the functional correctness of the design after each optimization step by means of exhaustive simulation can be prohibitively expensive. An alternative way to prove the functional correctness of the optimized design is to determine its functional equivalence w.r.t. a golden model which is known to be functionally correct. Efficient techniques to perform this process, known as equivalence checking, have been investigated in the research community. However, equivalence checking of sequential circuits still remains a challenging problem. In an attempt to solve this problem, we propose a Boolean SAT (satisfiability) based framework that utilizes logic implications for the purpose of sequential equivalence checking. Finally, we define a new concept called multi-cycle path-delay faults (M-pdfs). Traditionally, path delay faults have been analyzed for flip-flop based designs over the boundary of a single clock cycle. However, path delay faults may span multiple clock cycles, and a technique is desired to model and analyze such faults. This is especially essential for latch-based designs with multiple clock domains, because the problem of identifying untestable faults is more complex in such design environments. In this dissertation, we propose a three-step methodology to identify untestable M-pdfs in latch-based designs with multiple clocks using logic implications. / Ph. D.
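The logic implications this dissertation builds on can be illustrated with the direct implications of a single gate type, propagated transitively through a circuit. The toy two-gate circuit below is a hypothetical illustration of the general idea, not the dissertation's actual algorithm.

```python
def and_implications(a, b, y):
    """Direct implications for a 2-input AND gate y = a AND b,
    as a map from (signal, value) to the set of implied (signal, value)."""
    return {
        (y, 1): {(a, 1), (b, 1)},   # output high forces both inputs high
        (a, 0): {(y, 0)},           # either input low forces the output low
        (b, 0): {(y, 0)},
    }

def closure(assignment, implications):
    """Propagate implications transitively from an initial assignment."""
    implied = set(assignment)
    frontier = list(assignment)
    while frontier:
        lit = frontier.pop()
        for nxt in implications.get(lit, ()):
            if nxt not in implied:
                implied.add(nxt)
                frontier.append(nxt)
    return implied

# Hypothetical two-gate circuit: y = a AND b, z = y AND c
imp = and_implications("a", "b", "y")
for k, v in and_implications("y", "c", "z").items():
    imp.setdefault(k, set()).update(v)
result = closure({("z", 1)}, imp)     # setting z=1 implies a=b=c=y=1
```

If some fault requires an assignment whose implication closure is contradictory (both (s, 0) and (s, 1) for some signal s), that requirement can never be satisfied, which is the flavor of reasoning behind implication-based untestable-fault identification.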
75

Geometry of Self-Similar Sets

Roinestad, Kristine A. 22 May 2007 (has links)
This paper examines self-similar sets and some of their properties, including the natural equivalence relation found in bilipschitz equivalence. Both dimension and the preservation of paths are shown to be invariant under this equivalence. Sophisticated techniques, one involving the use of directed graphs, are then used to show the equivalence of two spaces. / Master of Science
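For strictly self-similar sets satisfying the open set condition, the dimension invariant mentioned above reduces to the similarity dimension d = log N / log r for a set built from N copies, each scaled by 1/r. A short sketch computing it for two standard examples (a fact independent of this particular thesis):

```python
import math

def similarity_dimension(num_copies, scale_factor):
    """Similarity dimension of a self-similar set built from
    `num_copies` pieces, each scaled down by 1/scale_factor:
    d = log(N) / log(r)."""
    return math.log(num_copies) / math.log(scale_factor)

# Sierpinski triangle: 3 copies at half scale
d = similarity_dimension(3, 2)   # log 3 / log 2, approximately 1.585
# Cantor middle-thirds set: 2 copies at one-third scale
c = similarity_dimension(2, 3)   # log 2 / log 3, approximately 0.631
```

Since bilipschitz maps distort distances by at most a bounded factor, this dimension cannot change under bilipschitz equivalence, which is why it serves as an invariant for distinguishing spaces.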
76

ATPG based Preimage Computation: Efficient Search Space Pruning using ZBDD

Chandrasekar, Kameshwar 06 August 2003 (has links)
Preimage computation is a fundamental step in formal verification of VLSI designs. Conventional OBDD-based methods for formal verification suffer from spatial explosion, since large designs can blow up in terms of memory. On the other hand, SAT/ATPG based methods are less demanding on memory, but their run-time can be huge, since they must explore an exponential search space. In order to reduce this temporal explosion of SAT/ATPG based methods, efficient learning techniques are needed. Conventional ATPG aims at computing a single solution for its objective; in preimage computation, we must enumerate all solutions for the target state during the search. Similar sub-problems often occur during preimage computation that can be identified by the internal state of the circuit. Therefore, it is highly desirable to learn from these search-states and avoid repeated search of identical solution/conflict subspaces, for better performance. In this thesis, we present a new ZBDD based method to compactly store and efficiently search previously explored search-states. We learn from these search-states and avoid repeating subsets and supersets of previously encountered search spaces. Both solution and conflict subspaces are pruned based on simple set operations using ZBDDs. We integrate our techniques into a PODEM based ATPG engine and demonstrate their efficiency on ISCAS '89 benchmark circuits. Experimental results show that up to 90% of the search space is pruned due to the proposed techniques, and we are able to compute preimages for target states where a state-of-the-art technique fails. / Master of Science
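The subset/superset pruning described above is, at its core, a containment test over sets of assignments. The thesis stores these search-states in ZBDDs for compactness and fast set operations; the sketch below uses plain frozensets purely to illustrate the pruning test, and the signal names are hypothetical.

```python
class SearchStateCache:
    """Record explored search-states (sets of value assignments) and
    prune any new state that is a superset of a recorded one: the
    smaller state's entire subspace has already been searched. In the
    thesis, ZBDDs hold these sets compactly; plain frozensets are used
    here only to illustrate the containment check."""
    def __init__(self):
        self.seen = []

    def record(self, state):
        self.seen.append(frozenset(state))

    def can_prune(self, state):
        s = frozenset(state)
        return any(old <= s for old in self.seen)

cache = SearchStateCache()
cache.record({"g1=0", "g4=1"})                     # hypothetical explored subspace
print(cache.can_prune({"g1=0", "g4=1", "g7=0"}))   # True: superset, skip it
print(cache.can_prune({"g1=0", "g7=0"}))           # False: genuinely new subspace
```

With many recorded states, the linear scan above becomes the bottleneck, which motivates the ZBDD representation: subset and superset queries become structural operations on the shared decision diagram instead of per-state comparisons.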
77

Translation and cultural adaptation with reference to Tshivenda and English : a case study of the medical field

Mashamba, Mabula January 2011 (has links)
Thesis (M.A. (African languages)) --University of Limpopo, 2011 / The aim of this study was to investigate the problems encountered by translators when translating medical terms from English into Tshivenda. The study revealed that the major problem translators are confronted with is a lack of terminology in specialized fields such as health, a problem that arises because different languages are embedded in different cultures. The study revealed that most translators and lexicographers resort to transliteration and borrowing when confronted with zero-equivalence, regarding these as the quickest strategies. The study found that transliteration should not be adopted as a strategy for dealing with zero-equivalence, as it leads users into confusion. The study concluded that communicative translation is the most fruitful method, as it conveys the exact message of the original in the best possible manner: both the source and the target users receive the same message. KEY CONCEPTS Translation, Culture, Source Language (SL), Target Language (TL), Translation equivalence and Zero-equivalence.
78

Relating Relations: The Impact of Equivalence-Equivalence Training on Analogical Reasoning

Garcia, Anna Rosio 04 November 2014 (has links)
A well-researched line of work in the field of Behavior Analysis (BA) has demonstrated equivalence performances in a wide variety of areas. One such area demonstrates that relating relations is a behavioral account of analogical thinking. Relating relations may have implications for the development of analogical training, given that analogical reasoning is seen as a foundation of intelligence, yet research in this area is limited. A protocol by Stewart, Barnes-Holmes, and Weil (2009) was developed to train children in analogical reasoning using equivalence-equivalence relations. The purpose of this study was to evaluate an equivalence-equivalence training protocol based on Stewart et al. (2009) and to test whether the protocol was effective in training equivalence-equivalence responding in 7- and 8-year-old children. A secondary purpose was to test whether training in equivalence-equivalence responding increased performance on analogical tests. All five participants were dismissed over the course of the study: Participant 1 was dismissed during the pre-assessments, and all other participants were dismissed during intervention. Because none of the participants passed the equivalence-equivalence training, increases in performance on analogical tests were not analyzed. Individual performance data from training are examined and analyzed to provide an account of the failures to pass the equivalence-equivalence protocol.
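Equivalence-equivalence responding (relating the relation within one stimulus pair to the relation within another) can be sketched as a comparison of relations rather than of stimuli. The stimulus classes below are hypothetical illustrations, not the study's materials.

```python
def same_class(x, y, classes):
    """True if stimuli x and y belong to the same equivalence class."""
    return any(x in c and y in c for c in classes)

def equivalence_equivalence(pair1, pair2, classes):
    """Relating relations: a compound match holds when both member
    pairs instantiate the same relation, i.e. both are 'same class'
    or both are 'different class'."""
    return same_class(*pair1, classes) == same_class(*pair2, classes)

# Two hypothetical three-member equivalence classes
classes = [{"A1", "B1", "C1"}, {"A2", "B2", "C2"}]
# same-same: both pairs are within-class, so the compound relation matches
assert equivalence_equivalence(("B1", "C1"), ("A2", "C2"), classes)
# same-different: one pair within-class, one across classes, so no match
assert not equivalence_equivalence(("B1", "C1"), ("A1", "B2"), classes)
```

This mirrors the structure of an analogy test item: the correct comparison stimulus is the pair whose internal relation matches that of the sample pair, not any pair sharing individual stimuli with it.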
79

Some aspects of style in twentieth-century English Bible translation : one-man versions of Mark and the Psalms

Sjölander, Pearl January 1979 (has links)
This is a study of the work of some seventy of the many hundreds of translators of the Bible, in whole or in part, into English during this century. Style, with particular emphasis on diction, is the major concern, though other aspects are touched on at times, as are methods of translation. Part one deals with versions of Mark in English prose, and part two with versions of the Psalms in English verse forms. The translations are grouped according to the aims and purposes of the translator and/or the type of language he employs. First a short passage is analysed - generally Mark 1:1-11 or Psalm 23 - and then a larger body of text is examined, and the various levels of diction and phrasing are noted, with examples cited of each. Some evaluation occurs, set against the criteria of comprehensibility and the suitability of the style to the subject-matter, to the style of the original, and to the limitations of the intended audience. Several factors are seen to affect the style of a Bible translation, the most conspicuous being the influence of tradition, the translation method used - formal or dynamic equivalence - and the amount of restructuring necessitated by audience-orientation. The main trend this century is the gradual departure from "Biblical" English and an increased interest in the use of comprehensible contemporary language. A comparison between the versions of Mark and the Psalms shows that their translators seemed to have different objectives. Translators of Mark were generally more interested in dynamic equivalence, some in reflecting the linguistic level of koiné Greek, and many in audience-orientation. Several, however, preferred to lean toward literalism. Translators of the Psalms into verse forms were concerned not with reflecting the linguistic level but rather with the prosodic features of the original Hebrew Psalms.
There is less interest in literalism, audience-orientation, and dynamic equivalence, except perhaps in versions in rhymed verse or a few of those in free verse. The overall impression gained from this study is that style is of vital importance to the effectiveness, usefulness and impact of a translation. / digitalisering@umu
80

Warum kann für Bienen, Bienenkörbe, Immen und Bienenschwärme die gleiche Übersetzung gewählt werden? : Eine Übersetzungsanalyse anhand von Kollers (2011) Äquivalenztypen [Why can the same translation be chosen for Bienen, Bienenkörbe, Immen and Bienenschwärme? A translation analysis based on Koller's (2011) equivalence types]

Mårtensson, Mia January 2013 (has links)
The concept of equivalence can be said to hold a central position in translation studies. In this particular thesis, different aspects of the equivalence concept are discussed on the basis of five German source texts and their respective Swedish translations. The source texts are situated within the field of beekeeping and were first published in Deutsches Bienenjournal and translated for the Swedish magazine Bitidningen. The equivalence theory presented by the translation theorist Werner Koller (2011) forms the basis of the analysis. Koller distinguishes five different types of equivalence: denotative, connotative, text-normative, pragmatic and formal-aesthetic equivalence. Koller’s equivalence concept is presented and discussed with a selection of illustrative examples displaying translation problems which arose during the translation of the source texts. In the thesis, special attention is paid to when and how the different equivalence aspects were considered. Specifically, it is made evident that sometimes very different translation solutions can be justified depending on which equivalence aspect the translator finds the most important in the current case. In addition, it is argued that the translator has to establish a hierarchy of the most important values of equivalence in each different translation case and for each different target text in order to make the most suitable translation decision. It is also stated that Koller’s theory is very useful for the translator when making decisions in difficult cases and, furthermore, that all of Koller’s five types of equivalence were relevant for the translation decision-making process.
