191

Estudo experimental das relações entre Kerma no ar e equivalente de dose ambiente para o cálculo de barreiras primárias em salas radiológicas. / Experimental study of the relationship between air kerma and ambient dose equivalent for the calculation of primary barriers in radiological rooms.

Santos, Josilene Cerqueira 07 August 2013 (has links)
A manutenção dos níveis de dose abaixo dos limites exigidos pelas normas nacionais e internacionais é essencial em todas as aplicações das radiações ionizantes. Os níveis de restrição de dose no Brasil são estabelecidos utilizando a grandeza equivalente de dose ambiente, H*(10), e, na prática de levantamentos radiométricos, os níveis de radiação são calculados por meio de medições com câmaras de ionização utilizando a grandeza kerma no ar, convertida para equivalente de dose ambiente por um coeficiente constante. O presente trabalho tem por objetivo o estudo experimental das relações entre o kerma no ar e a grandeza operacional equivalente de dose ambiente, pela medição de feixes de raios X transmitidos através de materiais utilizados em salas radiológicas dedicadas a exames de tórax. Uma metodologia experimental, baseada em técnicas de espectroscopia, foi desenvolvida para a medição dos espectros de raios X. Com os resultados, as estimativas da grandeza equivalente de dose ambiente, obtidas através de coeficientes de conversão entre o kerma no ar e esta grandeza, tornam-se mais realistas por levar em consideração as alterações espectrais decorrentes da atenuação dos feixes primários por objetos simuladores antropomórficos e por diferentes materiais atenuadores. Foi encontrada uma diferença máxima de 53,52% entre esses coeficientes e aquele adotado no Brasil por meio da ANVISA (1,14 Sv/Gy), o que indica uma subestimação desse valor. O comportamento espectral dos feixes de raios X transmitidos por barreiras primárias e atenuadores presentes em procedimentos radiológicos apresentou influência sobre resultados relacionados a levantamentos radiométricos e procedimentos de cálculo de barreiras. / The maintenance of dose levels below the limits required by national and international standards is essential in all applications of ionizing radiation. 
The dose constraint levels in Brazil are established in terms of the quantity ambient dose equivalent, H*(10), while the radiation levels in radiometric surveys are calculated by means of measurements with ion chambers using the quantity air kerma, converted to ambient dose equivalent by a constant factor. The present work aims at the experimental study of the relationship between air kerma and the operational quantity ambient dose equivalent, by measuring X-ray beams transmitted through materials used in dedicated chest radiography facilities. An experimental methodology, based on spectroscopic techniques, was developed for the X-ray spectrum measurements. With the results, estimates of the ambient dose equivalent quantity, obtained through conversion coefficients between this quantity and air kerma, become more realistic by taking into account the spectral changes resulting from the attenuation of the primary beams by anthropomorphic phantoms and different attenuating materials. The maximum difference found between these coefficients and the one adopted in Brazil by ANVISA (1.14 Sv/Gy) was 53.52%, which indicates an underestimation of that value. The spectral behavior of the X-ray beams transmitted by primary barriers and attenuators present in radiological procedures influenced results related to radiometric surveys and barrier calculation procedures.
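The conversion chain this abstract describes — air kerma measured with an ionization chamber, multiplied by an energy-dependent coefficient to estimate H*(10) — can be sketched as follows. The spectrum and the per-energy conversion coefficients below are illustrative placeholders, not data from the thesis:

```python
def effective_conversion_coefficient(fluence, h10_per_kerma):
    """Fluence-weighted mean of energy-dependent H*(10)/air-kerma
    conversion coefficients (Sv/Gy)."""
    total = sum(fluence.values())
    return sum(fluence[e] * h10_per_kerma[e] for e in fluence) / total

def ambient_dose_equivalent(air_kerma_gy, coefficient_sv_per_gy):
    """H*(10) estimated from a measured air kerma."""
    return air_kerma_gy * coefficient_sv_per_gy

# Illustrative transmitted spectrum: beam hardening shifts fluence to
# higher energies, where the conversion coefficient is larger.
spectrum = {40: 0.2, 60: 0.3, 80: 0.5}       # keV -> relative fluence
h10_coeff = {40: 1.18, 60: 1.59, 80: 1.73}   # Sv/Gy, placeholder values

k_eff = effective_conversion_coefficient(spectrum, h10_coeff)
h10 = ambient_dose_equivalent(1.0e-6, k_eff)  # from 1 uGy air kerma
```

A spectrum-dependent effective coefficient above the constant 1.14 Sv/Gy is exactly the kind of discrepancy the abstract quantifies.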
192

Catalytic mechanisms of thymidylate synthases: bringing experiments and computations together

Wang, Zhen 01 May 2012 (has links)
The relationship between protein structure, motions, and catalytic activity is an evolving perspective in enzymology. An interactive approach, where experimental and theoretical studies examine the same catalytic mechanism, is instrumental in addressing this issue. We combine various techniques, including steady state and pre-steady state kinetics, temperature dependence of kinetic isotope effects (KIEs), site-directed mutagenesis, X-ray crystallography, and quantum mechanics/molecular mechanics (QM/MM) calculations, to study the catalytic mechanisms of thymidylate synthase (TSase). Since TSase catalyzes the last step of the sole intracellular de novo synthesis of thymidylate (i.e. the DNA base T), it is a common target for antibiotic and anticancer drugs. The proposed catalytic mechanism for TSase comprises a series of bond cleavages and formations including activation of two C-H bonds: a rate-limiting C-H→C hydride transfer and a faster C-H→O proton transfer. This provides an excellent model system to examine the structural and dynamic effects of the enzyme on different C-H cleavage steps in the same catalyzed reaction. Our experiments found that the KIE on the hydride transfer is temperature independent while the KIE on the proton transfer is temperature dependent, implying that the protein environment is better organized for H-tunneling in the former. Our QM/MM calculations revealed that the hydride transfer has a transition state (TS) that is invariable with temperature while the proton transfer has multiple subsets of TS structures, which corroborates our experimental results. The calculations also suggest that collective protein motions rearrange the network of H-bonds to accompany structural changes in the ligands during and between chemical transformations. 
These computational results not only illustrate functionalities of specific protein residues that reconcile many previous experimental observations, but also provide guidance for future experiments to verify the proposed mechanisms. In addition, we conducted experiments to examine the importance of long-range interactions in the TSase-catalyzed reaction, using both kinetic and structural analysis. Those experiments found that a remote mutation affects the hydride transfer by disrupting concerted protein motions, and that Mg2+ binds to the surface of TSase and affects the hydride transfer at the interior active site. Both our experiments and computations have exposed interesting features of ecTSase that can potentially provide new targets for antibiotics directed at DNA biosynthesis. The relationship between protein structure, motions, and catalytic activity learned from this project may have general implications for the question of how enzymes work.
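The temperature (in)dependence of a KIE described above is commonly analyzed with an Arrhenius-type expression, KIE(T) = (A_H/A_D)·exp(ΔEa/RT), with ΔEa = Ea(D) − Ea(H). A small sketch with hypothetical parameters (not the fitted TSase values) shows the two regimes:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def kie(temp_k, a_ratio, delta_ea_j_mol):
    """Arrhenius-type KIE: (A_H/A_D) * exp(delta_Ea / (R*T)),
    with delta_Ea = Ea(D) - Ea(H)."""
    return a_ratio * math.exp(delta_ea_j_mol / (R * temp_k))

# Hypothetical parameters for illustration only:
# hydride transfer: delta_Ea ~ 0  -> KIE flat across temperature
# proton transfer:  delta_Ea > 0  -> KIE shrinks as T rises
kie_hydride_cold = kie(278.0, 3.5, 0.0)
kie_hydride_warm = kie(318.0, 3.5, 0.0)
kie_proton_cold = kie(278.0, 1.2, 4000.0)
kie_proton_warm = kie(318.0, 1.2, 4000.0)
```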
193

Quantum Mechanical Studies of Charge Assisted Hydrogen and Halogen Bonds

Nepal, Binod 01 May 2016 (has links)
This dissertation focuses on charge-assisted noncovalent interactions, especially hydrogen and halogen bonds. Noncovalent interactions are generally weak, but introducing a suitable charge on the binding units can increase the strength of a noncovalent bond by several orders of magnitude. These charge-assisted noncovalent interactions have a wide range of applications, from crystal engineering to drug design, and nature accomplishes a number of important tasks using them. Although a good number of theoretical and experimental studies have already been carried out in this field, some fundamental properties of charge-assisted hydrogen and halogen bonds still lack a molecular-level understanding, and their electronic properties are yet to be explored. A better understanding of the electronic properties of these bonds will aid the rational design of drugs, novel functional materials, catalysts, and so on. Most of this dissertation presents comparative studies between charged and neutral noncovalent interactions using quantum mechanical calculations, focusing primarily on energetics and electronic properties. In most cases, comparisons are also made between hydrogen and halogen bonds, which contradict the long-standing notion that the hydrogen bond is the strongest noncovalent interaction. Beyond that, this dissertation also explores the long-range behavior and directional properties of various neutral and charge-assisted noncovalent bonds.
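The energetics comparisons mentioned above rest on supermolecular interaction energies of the form E_int = E_complex − (E_A + E_B). A minimal sketch with invented total energies (in hartree), not results from the dissertation:

```python
HARTREE_TO_KCAL = 627.509  # kcal/mol per hartree

def interaction_energy_kcal(e_complex, e_mono_a, e_mono_b):
    """Supermolecular interaction energy in kcal/mol from total
    electronic energies in hartree (no counterpoise correction)."""
    return (e_complex - (e_mono_a + e_mono_b)) * HARTREE_TO_KCAL

# Invented total energies for a neutral and a charge-assisted dimer;
# a more negative value means a stronger (more stabilizing) bond.
neutral = interaction_energy_kcal(-152.420, -76.205, -76.207)
charged = interaction_energy_kcal(-152.462, -76.205, -76.207)
```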
194

Adapting SafeMedicate (Medication Dosage Calculation Skills software) For Use In Brazil

Ozorio Dutra, Samia Valeria 25 June 2018 (has links)
Medication-related errors are a significant cause of morbidity and mortality. In Brazil, most errors are related to prescribing, preparing, and administering medications. One way to deal with this barrier to safe care is through assessment and education of medication dosage calculation skills. Considering the Brazilian context, this dissertation presents a context and language adaptation of an evidence-based intervention called safeMedicate, a program that reinforces learning synthesis in crucial elements of medication dosage problem solving and provides the foundation for development in the remaining levels of the hierarchy of learning. A guideline for developing or improving medication calculation skills, based on the seven research-based principles for smart teaching, was developed. These teaching approaches benefit multiple methods of learning by addressing cognitive, motivational, and developmental goals. Web-based software can be a strong ally in adopting those approaches by complementing class practice and providing opportunities for practice learning. The two phases of adaptation and preliminary evaluation of safeMedicate for use in Brazil were guided by the Participatory and Iterative Process Framework for Language Adaptation (PIPFLA) cross-cultural equivalence model. A triangulation method of face validity survey, journaling, and multiple focus groups was used. The focus groups were (1) a language adaptation team, (2) a panel of experts, and (3) a student panel. To analyze the focus group data, a systematic coding procedure was performed through an iterative process, resolving any differences between coders to guarantee internal consistency. The main themes while discussing necessary adaptations of safeMedicate for use in Brazil were language, visuals, content, programming, and data.
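The dosage calculation skills that safeMedicate targets rest on the standard "desired over have" formula, volume = (prescribed dose / stock strength) × stock volume. A minimal illustrative helper (hypothetical, not part of safeMedicate):

```python
def volume_to_administer(prescribed_dose_mg, stock_strength_mg, stock_volume_ml):
    """Classic 'desired over have' dosage formula:
    volume = (prescribed dose / stock strength) * stock volume."""
    if stock_strength_mg <= 0:
        raise ValueError("stock strength must be positive")
    return (prescribed_dose_mg / stock_strength_mg) * stock_volume_ml

# 250 mg prescribed from an ampoule holding 500 mg in 10 mL:
dose_ml = volume_to_administer(250, 500, 10)
```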
195

Semantische Repräsentation, obligatorische Aktivierung und verbale Produktion arithmetischer Fakten / Semantic representation, obligatory activation, and verbal production of arithmetic facts

Domahs, Frank January 2006 (has links)
Die vorliegende Arbeit widmet sich der Repräsentation und Verarbeitung arithmetischer Fakten. Dieser Bereich semantischen Wissens eignet sich unter anderem deshalb besonders gut als Forschungsgegenstand, weil nicht nur seine einzelnen Bestandteile, sondern auch die Beziehungen dieser Bestandteile untereinander außergewöhnlich gut definierbar sind. Kognitive Modelle können also mit einem Grad an Präzision entwickelt werden, der in anderen Bereichen kaum je zu erreichen sein wird. Die meisten aktuellen Modelle stimmen darin überein, die Repräsentation arithmetischer Fakten als eine assoziative, netzwerkartig organisierte Struktur im deklarativen Gedächtnis zu beschreiben. Trotz dieser grundsätzlichen Übereinstimmung bleibt eine Reihe von Fragen offen. In den hier vorgestellten Untersuchungen werden solche offenen Fragen in Hinsicht auf drei verschiedene Themenbereiche angegangen: 1) die neuroanatomischen Korrelate, 2) Nachbarschaftskonsistenzeffekte bei der verbalen Produktion sowie 3) die automatische Aktivierung arithmetischer Fakten. In einer kombinierten fMRT- und Verhaltensstudie wurde beispielsweise der Frage nachgegangen, welche neurofunktionalen Entsprechungen es für den Erwerb arithmetischer Fakten bei Erwachsenen gibt. Den Ausgangspunkt für diese Untersuchung bildete das Triple-Code-Modell von Dehaene und Cohen, da es als einziges auch Aussagen über neuroanatomische Korrelate numerischer Leistungen macht. Das Triple-Code-Modell geht davon aus, dass zum Abruf arithmetischer Fakten eine „perisylvische“ Region der linken Hemisphäre unter Einbeziehung der Stammganglien sowie des Gyrus angularis nötig ist (Dehaene & Cohen, 1995; Dehaene & Cohen, 1997; Dehaene, Piazza, Pinel, & Cohen, 2003). In der aktuellen Studie sollten gesunde Erwachsene komplexe Multiplikationsaufgaben etwa eine Woche lang intensiv üben, so dass ihre Beantwortung immer mehr automatisiert erfolgt. 
Die Lösung dieser geübten Aufgaben sollte somit – im Gegensatz zu vergleichbaren ungeübten Aufgaben – immer stärker auf Faktenabruf als auf der Anwendung von Prozeduren und Strategien beruhen. Hingegen sollten ungeübte Aufgaben im Vergleich zu geübten höhere Anforderungen an exekutive Funktionen einschließlich des Arbeitsgedächtnisses stellen. Nach dem Training konnten die Teilnehmer – wie erwartet – geübte Aufgaben deutlich schneller und sicherer beantworten als ungeübte. Zusätzlich wurden sie auch im Magnetresonanztomografen untersucht. Dabei konnte zunächst bestätigt werden, dass das Lösen von Multiplikationsaufgaben allgemein von einem vorwiegend linkshemisphärischen Netzwerk frontaler und parietaler Areale unterstützt wird. Das wohl wichtigste Ergebnis ist jedoch eine Verschiebung der Hirnaktivierungen von eher frontalen Aktivierungsmustern zu einer eher parietalen Aktivierung und innerhalb des Parietallappens vom Sulcus intraparietalis zum Gyrus angularis bei den geübten im Vergleich zu den ungeübten Aufgaben. So wurde die zentrale Bedeutung von Arbeitsgedächtnis- und Planungsleistungen für komplexe ungeübte Rechenaufgaben erneut herausgestellt. Im Sinne des Triple-Code-Modells könnte die Verschiebung innerhalb des Parietallappens auf einen Wechsel von quantitätsbasierten Rechenleistungen (Sulcus intraparietalis) zu automatisiertem Faktenabruf (linker Gyrus angularis) hindeuten. Gibt es bei der verbalen Produktion arithmetischer Fakten Nachbarschaftskonsistenzeffekte ähnlich zu denen, wie sie auch in der Sprachverarbeitung beschrieben werden? Solche Effekte sind nach dem aktuellen „Dreiecksmodell“ von Verguts & Fias (2004) zur Repräsentation von Multiplikationsfakten erwartbar. Demzufolge sollten richtige Antworten leichter gegeben werden können, wenn sie Ziffern mit möglichst vielen semantisch nahen falschen Antworten gemeinsam haben. 
Möglicherweise sollten demnach aber auch falsche Antworten dann mit größerer Wahrscheinlichkeit produziert werden, wenn sie eine Ziffer mit der richtigen Antwort teilen. Nach dem Dreiecksmodell wäre darüber hinaus sogar der klassische Aufgabengrößeneffekt bei einfachen Multiplikationsaufgaben (Zbrodoff & Logan, 2004) auf die Konsistenzverhältnisse der richtigen Antwort mit semantisch benachbarten falschen Antworten zurückzuführen. In einer Reanalyse der Fehlerdaten von gesunden Probanden (Campbell, 1997) und einem Patienten (Domahs, Bartha, & Delazer, 2003) wurden tatsächlich Belege für das Vorhandensein von Zehnerkonsistenzeffekten beim Lösen einfacher Multiplikationsaufgaben gefunden. Die Versuchspersonen bzw. der Patient hatten solche falschen Antworten signifikant häufiger produziert, welche die gleiche Zehnerziffer wie das richtige Ergebnis aufwiesen, als ansonsten vergleichbare andere Fehler. Damit wird die Annahme unterstützt, dass die Zehner- und die Einerziffern zweistelliger Zahlen separate Repräsentationen aufweisen – bei der Multiplikation (Verguts & Fias, 2004) wie auch allgemein bei numerischer Verarbeitung (Nuerk, Weger, & Willmes, 2001; Nuerk & Willmes, 2005). Zusätzlich dazu wurde in einer Regressionsanalyse über die Fehlerzahlen auch erstmalig empirische Evidenz für die Hypothese vorgelegt, dass der klassische Aufgabengrößeneffekt beim Abruf von Multiplikationsfakten auf Zehnerkonsistenzeffekte zurückführbar ist: Obwohl die Aufgabengröße als erster Prädiktor in das Modell einging, wurde diese Variable wieder verworfen, sobald ein Maß für die Nachbarschaftskonsistenz der richtigen Antwort in das Modell aufgenommen wurde. Schließlich wurde in einer weiteren Studie die automatische Aktivierung von Multiplikationsfakten bei gesunden Probanden mit einer Zahlenidentifikationsaufgabe (Galfano, Rusconi, & Umilta, 2003; Lefevre, Bisanz, & Mrkonjic, 1988; Thibodeau, Lefevre, & Bisanz, 1996) untersucht. 
Dabei sollte erstmals die Frage beantwortet werden, wie sich die automatische Aktivierung der eigentlichen Multiplikationsergebnisse (Thibodeau et al., 1996) zur Aktivierung benachbarter falscher Antworten (Galfano et al., 2003) verhält. Ferner sollte durch die Präsentation mit verschiedenen SOAs der zeitliche Verlauf dieser Aktivierungen aufgeklärt werden. Die Ergebnisse dieser Studie können insgesamt als Evidenz für das Vorhandensein und die automatische, obligatorische Aktivierung eines Netzwerkes arithmetischer Fakten bei gesunden, gebildeten Erwachsenen gewertet werden, in dem die richtigen Produkte stärker mit den Faktoren assoziiert sind als benachbarte Produkte (Operandenfehler). Dabei führen Produkte kleiner Aufgaben zu einer stärkeren Interferenz als Produkte großer Aufgaben und Operandenfehler großer Aufgaben zu einer stärkeren Interferenz als Operandenfehler kleiner Aufgaben. Ein solches Aktivierungsmuster passt gut zu den Vorhersagen des Assoziationsverteilungsmodells von Siegler (Lemaire & Siegler, 1995; Siegler, 1988), bei dem kleine Aufgaben eine schmalgipflige Verteilung der Assoziationen um das richtige Ergebnis herum aufweisen, große Aufgaben jedoch eine breitgipflige Verteilung. Somit sollte die vorliegende Arbeit etwas mehr Licht in bislang weitgehend vernachlässigte Aspekte der Repräsentation und des Abrufs arithmetischer Fakten gebracht haben: Die neuronalen Korrelate ihres Erwerbs, die Konsequenzen ihrer Einbindung in das Stellenwertsystem mit der Basis 10 sowie die spezifischen Auswirkungen ihrer assoziativen semantischen Repräsentation auf ihre automatische Aktivierbarkeit. Literatur Campbell, J. I. (1997). On the relation between skilled performance of simple division and multiplication. Journal of Experimental Psychology: Learning, Memory, and Cognition, 23, 1140-1159. Dehaene, S. & Cohen, L. (1995). Towards an anatomical and functional model of number processing. Mathematical Cognition, 1, 83-120. Dehaene, S. & Cohen, L. (1997). 
Cerebral pathways for calculation: double dissociation between rote verbal and quantitative knowledge of arithmetic. Cortex, 33, 219-250. Dehaene, S., Piazza, M., Pinel, P., & Cohen, L. (2003). Three parietal circuits for number processing. Cognitive Neuropsychology, 20, 487-506. Domahs, F., Bartha, L., & Delazer, M. (2003). Rehabilitation of arithmetic abilities: Different intervention strategies for multiplication. Brain and Language, 87, 165-166. Galfano, G., Rusconi, E., & Umilta, C. (2003). Automatic activation of multiplication facts: evidence from the nodes adjacent to the product. Quarterly Journal of Experimental Psychology A, 56, 31-61. Lefevre, J. A., Bisanz, J., & Mrkonjic, L. (1988). Cognitive arithmetic: evidence for obligatory activation of arithmetic facts. Memory and Cognition, 16, 45-53. Lemaire, P. & Siegler, R. S. (1995). Four aspects of strategic change: contributions to children's learning of multiplication. Journal of Experimental Psychology: General, 124, 83-97. Nuerk, H. C., Weger, U., & Willmes, K. (2001). Decade breaks in the mental number line? Putting the tens and units back in different bins. Cognition, 82, B25-B33. Nuerk, H. C. & Willmes, K. (2005). On the magnitude representations of two-digit numbers. Psychology Science, 47, 52-72. Siegler, R. S. (1988). Strategy choice procedures and the development of multiplication skill. Journal of Experimental Psychology: General, 117, 258-275. Thibodeau, M. H., Lefevre, J. A., & Bisanz, J. (1996). The extension of the interference effect to multiplication. Canadian Journal of Experimental Psychology, 50, 393-396. Verguts, T. & Fias, W. (2004). Neighborhood Effects in Mental Arithmetic. Psychology Science. Zbrodoff, N. J. & Logan, G. D. (2004). What everyone finds: The problem-size effect. In J. I. D. Campbell (Hrsg.), Handbook of Mathematical Cognition (pp.331-345). New York, NY: Psychology Press. / The present thesis deals with the representation and processing of arithmetic facts. 
This domain of semantic knowledge has gained a substantial amount of interest, as its components as well as their interrelations are well specified. Thus, cognitive models can be developed with a degree of precision that cannot be reached in many other domains. Most recent models agree that arithmetic facts are represented in an associative, network-like structure in declarative memory. Despite this general agreement, many issues still remain unresolved. The open questions tackled in the present work address three different aspects of arithmetic facts: 1) their neuro-anatomical correlates, 2) neighbourhood consistency effects in their verbal production and 3) their automatic activation. In a combined behavioural and fMRI study, the neurofunctional correlates of the acquisition of arithmetic facts in adults were examined. This research was based on the Triple-Code-Model of Dehaene and Cohen, the only recent model which makes explicit assumptions on neuroanatomical correlates of numerical abilities. The Triple-Code-Model assumes that a “perisylvian” region in the left hemisphere including the basal ganglia and the Angular Gyrus is involved in the retrieval of arithmetic facts (Dehaene & Cohen, 1995; Dehaene & Cohen, 1997; Dehaene, Piazza, Pinel, & Cohen, 2003). In the present study, healthy adults were asked to practise complex multiplication problems intensively for one week, so that these problems could be solved more and more automatically. It was reasoned that answering these trained problems should rely more and more on the retrieval of facts from declarative memory, whereas answering untrained problems should rely on the application of strategies and procedures, which impose high demands on executive functions including working memory. After the training was finished, participants – as expected – could solve trained problems faster and more accurately than non-trained problems. 
Participants also underwent a functional magnetic resonance imaging examination. In general, this examination added to the evidence for a mainly left-hemispheric fronto-parietal network being involved in mental multiplication. Crucially, comparing trained with non-trained problems, a shift of activation from frontal to more parietal regions was observed. Thus, the central role of the central executive and working memory for complex calculation was highlighted. Moreover, a shift of activation from the Intraparietal Sulcus to the Angular Gyrus took place within the parietal lobe. According to the Triple-Code-Model, this shift may be interpreted to indicate a strategy change from quantity-based calculation, relying on the Intraparietal Sulcus, to fact retrieval, relying on the left Angular Gyrus. Are there neighbourhood consistency effects in the verbal production of arithmetic facts similar to what has been described for language production? According to the “Triangle Model” of simple multiplication, proposed by Verguts & Fias (2004), such effects can be expected. According to this model, correct answers can be given more easily if they share digits with many semantically close wrong answers. Moreover, it can be assumed that wrong answers, too, are more likely to be produced if they share a digit with the correct result. In addition to this, the Triangle Model also states that the classical problem size effect in simple multiplication (Zbrodoff & Logan, 2004) can be traced back to neighbourhood consistency between the correct result and semantically close wrong answers. In fact, a re-analysis of error data from a sample of healthy young adults (Campbell, 1997) and a patient with acalculia (Domahs, Bartha, & Delazer, 2003) provided evidence for the existence of decade consistency effects in the verbal production of multiplication results. 
Healthy participants and the patient produced significantly more wrong answers which shared the decade digit with the correct result than otherwise comparable wrong answers. This result supports the assumption of separate representations of decade and unit digits in two-digit numbers, in multiplication (Verguts & Fias, 2004) and in number processing in general (Nuerk, Weger, & Willmes, 2001; Nuerk & Willmes, 2005). Moreover, an additional regression analysis on the error rates provided the first empirical evidence for the hypothesis that the classical problem size effect in the retrieval of multiplication facts may be an artefact of neighbourhood consistency: Although problem size was the first variable to enter the model, it was excluded from the model once a measure of neighbourhood consistency was included. Finally, in a further study the automatic activation of multiplication facts was examined in a number matching task (Galfano, Rusconi, & Umilta, 2003; Lefevre, Bisanz, & Mrkonjic, 1988; Thibodeau, Lefevre, & Bisanz, 1996). This experiment addressed the question of how the automatic activation of actual multiplication results (Thibodeau et al., 1996) relates to the activation of semantically close wrong answers (Galfano et al., 2003). Furthermore, different SOAs were used to disclose the temporal properties of these activations. In general, the results of this study provide evidence for an obligatory and automatic activation of a network of arithmetic facts in healthy educated adults, in which correct results are more strongly associated with the operands than semantically related wrong answers. Crucially, products of small problems lead to stronger interference effects than products of larger problems, while operand errors of large problems lead to stronger interference effects than operand errors of small problems. 
Such a pattern of activation is in line with predictions of Siegler’s Distribution of Associations Model (Lemaire & Siegler, 1995; Siegler, 1988), which assumes a more peaked distribution of associations between operands and potential results for small compared to large multiplication problems. In sum, the present thesis should shed some light on largely ignored aspects of arithmetic fact retrieval: The neural correlates of its acquisition, the consequences of its implementation in the base 10 place value system, as well as the specific effects of its semantic representation for automatic activation of correct multiplication facts and related results. References Campbell, J. I. (1997). On the relation between skilled performance of simple division and multiplication. Journal of Experimental Psychology: Learning, Memory, and Cognition, 23, 1140-1159. Dehaene, S. & Cohen, L. (1995). Towards an anatomical and functional model of number processing. Mathematical Cognition, 1, 83-120. Dehaene, S. & Cohen, L. (1997). Cerebral pathways for calculation: double dissociation between rote verbal and quantitative knowledge of arithmetic. Cortex, 33, 219-250. Dehaene, S., Piazza, M., Pinel, P., & Cohen, L. (2003). Three parietal circuits for number processing. Cognitive Neuropsychology, 20, 487-506. Domahs, F., Bartha, L., & Delazer, M. (2003). Rehabilitation of arithmetic abilities: Different intervention strategies for multiplication. Brain and Language, 87, 165-166. Galfano, G., Rusconi, E., & Umilta, C. (2003). Automatic activation of multiplication facts: evidence from the nodes adjacent to the product. Quarterly Journal of Experimental Psychology A, 56, 31-61. Lefevre, J. A., Bisanz, J., & Mrkonjic, L. (1988). Cognitive arithmetic: evidence for obligatory activation of arithmetic facts. Memory and Cognition, 16, 45-53. Lemaire, P. & Siegler, R. S. (1995). Four aspects of strategic change: contributions to children's learning of multiplication. 
Journal of Experimental Psychology: General, 124, 83-97. Nuerk, H. C., Weger, U., & Willmes, K. (2001). Decade breaks in the mental number line? Putting the tens and units back in different bins. Cognition, 82, B25-B33. Nuerk, H. C. & Willmes, K. (2005). On the magnitude representations of two-digit numbers. Psychology Science, 47, 52-72. Siegler, R. S. (1988). Strategy choice procedures and the development of multiplication skill. Journal of Experimental Psychology: General, 117, 258-275. Thibodeau, M. H., Lefevre, J. A., & Bisanz, J. (1996). The extension of the interference effect to multiplication. Canadian Journal of Experimental Psychology, 50, 393-396. Verguts, T. & Fias, W. (2004). Neighborhood Effects in Mental Arithmetic. Psychology Science. Zbrodoff, N. J. & Logan, G. D. (2004). What everyone finds: The problem-size effect. In J. I. D. Campbell (Ed.), Handbook of Mathematical Cognition (pp.331-345). New York, NY: Psychology Press.
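The distribution-of-associations account invoked above (peaked association distributions for small problems, flatter ones for large problems) can be illustrated with a toy retrieval model; the association strengths below are invented for illustration:

```python
def retrieval_probabilities(strengths):
    """Normalize association strengths into retrieval probabilities
    (Luce choice rule), as in Siegler-style retrieval models."""
    total = sum(strengths.values())
    return {answer: s / total for answer, s in strengths.items()}

# Toy associations: 3 x 4 (small problem, peaked distribution) vs
# 8 x 7 (large problem, flatter distribution over operand errors).
small = retrieval_probabilities({12: 8.0, 15: 1.0, 16: 1.0})
large = retrieval_probabilities({56: 4.0, 48: 3.0, 63: 3.0})

p_correct_small = small[12]   # high: competitors get little weight
p_correct_large = large[56]   # lower: operand errors compete strongly
```

The flatter distribution for the large problem is what produces both the problem size effect and the stronger interference from operand errors of large problems.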
196

Molekülmechanische und quantenchemische Berechnung der räumlichen und elektronischen Struktur von Vanadium(IV)- und Oxo-Rhenium(V)-Chelaten dreizähnig diacider Liganden

Jäger, Norbert January 1998 (has links)
In dieser Arbeit wurden die Molekülstrukturen und die elektronischen Eigenschaften von Vanadium(IV)- und Oxo-Rhenium(V)-Chelaten mit einem kombinierten molekülmechanisch-quantenchemischen Ansatz untersucht, um sterische und elektronische Effekte der Komplexierung mit einem theoretischen Modell zu quantifizieren. Es konnte gezeigt werden, daß auf diese Weise detaillierte Aussagen zu den Bindungsverhältnissen der Metallchelate getroffen werden können. Die Berechnung der Molekülstrukturen gelingt mit exzellenter Übereinstimmung mit den Kristallstrukturen der Komplexe. Die molekülmechanischen Berechnungen erfolgen auf der Grundlage des Extensible Systematic Force Field (ESFF) und des Consistent Force Field 91 (CFF91). Dabei konnte die hohe Flexibilität und Zuverlässigkeit des regelbasierten ESFF für eine Vielzahl verschiedenster Metallchelate nachgewiesen werden. Aufgrund der mangelhaften Ergebnisse für trigonal-prismatische Komplexgeometrien mit dem ESFF wurde eine Anpassung des CFF91 für derartige Vanadiumkomplexe vorgenommen. Auf Grundlage von theoretischen Ergebnissen wurden die alternativen Strukturen von isoelektronischen Vanadiumkomplexen berechnet und in Übereinstimmung mit experimentellen Daten, theoretischen Modellen der Komplexchemie und empirischen Fakten eine Hypothese für die Ursache der strukturellen Differenzen erarbeitet. Der hier vorgestellte, kombinierte Algorithmus aus kraftfeldbasierter Geometrieoptimierung und Single-Point-Rechnung an diesen Strukturen ist ein zuverlässiger und relativ schneller Weg, Molekülgeometrien von Metallkomplexen zu berechnen. Er kann somit zur Voraussage von Komplexstrukturen und zur gezielten Modellierung definierter Koordinationsgeometrien verwendet werden. / In this work the molecular structures and the electronic properties of Vanadium(IV)- and Oxo-Rhenium(V)-chelates have been investigated to quantify steric and electronic effects of complexation. 
It has been shown that in this way detailed insight can be gained into the bonding conditions of these metal complexes. Molecular mechanics calculations based on the Extensible Systematic Force Field (ESFF) and the Consistent Force Field 91 (CFF91) have been carried out. The high flexibility and reliability of the rule-based ESFF has been proven for a large variety of different metal chelates. Due to the poor ESFF results for trigonal-prismatic complex geometries, a fit of the CFF91 for these species was carried out. Based on the theoretical results, the alternative structures of isoelectronic vanadium(IV) complexes have been calculated, and a hypothesis on the reason for the structural differences has been stated in accordance with experimental results, theoretical models of complex chemistry, and empirical facts. This combined approach of force-field-based geometry optimization and single-point calculation at these structures has been proven to be a reliable and fast way to obtain molecular structures of metal complexes. It can be used to predict complex structures and to model distinct coordination geometries.
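The combined workflow described above — force-field geometry optimization followed by a single-point calculation at the optimized structure — can be caricatured in one dimension. The harmonic "force field" and Morse-type "single-point" energy below are toy functions, not ESFF/CFF91:

```python
import math

def minimize_bond_length(energy, r0=1.5, step=1e-4, iters=20000):
    """Crude gradient descent on a force-field energy over a single
    bond length, standing in for the MM geometry optimization."""
    r = r0
    for _ in range(iters):
        grad = (energy(r + 1e-6) - energy(r - 1e-6)) / 2e-6
        r -= step * grad
    return r

def toy_force_field(r):
    """Harmonic bond term with minimum at r = 2.0 (toy MM stand-in)."""
    return 0.5 * 300.0 * (r - 2.0) ** 2

def toy_quantum_energy(r):
    """Morse-type energy evaluated once at the MM geometry
    (toy stand-in for the quantum-chemical single point)."""
    return 90.0 * (1.0 - math.exp(-1.8 * (r - 2.05))) ** 2

r_opt = minimize_bond_length(toy_force_field)  # fast MM optimization
e_single_point = toy_quantum_energy(r_opt)     # one expensive evaluation
```

The design point is the same as in the thesis: the cheap energy function does the many-step optimization, and the expensive one is evaluated only once, at the final geometry.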
197

Omkostningskalkulation for avancerede produktionsomgivelser : en sammenligning af stokastiske og deterministiske omkostningskalkulationsmodeller / Cost accounting for advanced manufacturing environments: a comparison of stochastic and deterministic cost-calculation models

Nielsen, Steen January 1996
How can a cost-calculation model be designed under modern, flexible production conditions, and how does the stochastic behaviour of production affect a given calculation model once every resource deployed in production is taken into account? These questions are discussed on the basis of the existing cost-accounting theory in the field and in relation to two concrete case companies. To make concrete calculations of the effects of stochastic variation, a model based on an FMS (flexible manufacturing system) was built and tested via stochastic simulation. The results showed that variation in processing, transport and lead times can have a relatively large effect on unit costs compared with the deterministic case. A stochastic cost model also makes it possible to estimate the effect of Design For Manufacturability (DFM) via the standard deviation, and thereby to work systematically at minimizing the stochastic variation in production. / Diss. Stockholm : Handelshögskolan
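The stochastic-versus-deterministic comparison described in this abstract can be illustrated with a toy Monte Carlo sketch. This is not the dissertation's FMS model: the cost function, rates and distributions below are invented purely for illustration. The point is that the stochastic run yields a standard deviation, a quantity the deterministic calculation cannot produce, which is exactly the handle the abstract proposes for estimating DFM effects.

```python
import random
import statistics

def unit_cost(proc_h, transport_h, lead_h,
              machine_rate=80.0, handling_rate=30.0, holding_rate=2.0):
    """Unit cost as a simple linear function of the time drivers (illustrative)."""
    return (proc_h * machine_rate
            + transport_h * handling_rate
            + lead_h * holding_rate)

# Deterministic case: planned (mean) times only.
det = unit_cost(proc_h=1.5, transport_h=0.4, lead_h=20.0)

# Stochastic case: the same drivers drawn from assumed distributions.
random.seed(1)
samples = [
    unit_cost(proc_h=random.gauss(1.5, 0.3),
              transport_h=random.gauss(0.4, 0.1),
              lead_h=random.expovariate(1 / 20.0))
    for _ in range(10_000)
]

mean_cost = statistics.fmean(samples)
std_cost = statistics.stdev(samples)

# The standard deviation quantifies the variation that a DFM effort could
# try to reduce; the single deterministic figure hides it entirely.
print(f"deterministic: {det:.2f}")
print(f"stochastic:    {mean_cost:.2f} +/- {std_cost:.2f}")
```

With symmetric process noise the stochastic mean stays near the deterministic figure, but the spread around it is what drives risk in the unit cost.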
198

An analysis of arctic seabird trophic levels and foraging locations using stable isotopes

Moody, Allison Theresa 14 May 2007
Arctic ecosystems are vulnerable to human-induced changes such as increases in contaminant levels and climatic warming. To predict the effects of these changes, it is important to understand trophic relationships among Arctic organisms and how they change over time and in response to environmental perturbations. Seabird diet can reflect the relative availability and abundance of planktivorous fish and zooplankton in remote areas. The measurement of naturally occurring stable isotopes offers an alternative approach to evaluating dietary patterns of seabirds both at the individual level and at a larger, ecosystem level: relative changes in δ13C values indicate shifts in consumption of benthic vs. pelagic prey, and changes in δ15N values indicate shifts in trophic level.

I investigated trophic positions of four seabirds (Thick-billed Murres (Uria lomvia), Northern Fulmars (Fulmarus glacialis), Black-legged Kittiwakes (Rissa tridactyla), and Glaucous Gulls (Larus hyperboreus)) at Prince Leopold Island, Nunavut, 1988–2003, using my own and previously collected stable-isotope measurements in blood samples. Trophic level and space use differed among years, both within and among species, and may be related to ice conditions and species-specific foraging strategies. The species with the most flexible foraging methods, Thick-billed Murres, varied their foraging location and trophic level the most. In 2002, fewer chicks than average were fledged for all species and Thick-billed Murre chicks were lighter than in other years; however, only murres showed a concurrent decrease in the proportion of fish in their diet. Adult body condition of murres in 2002 was positively correlated with trophic level. Breeding-season dietary patterns of Thick-billed Murre adults and chicks were examined on Coats Island, Nunavut, Canada, in 2004. Adult trophic level increased slightly through the breeding season, and δ13C values indicated a switch from benthic to pelagic foraging locations. Chick and adult murres did not differ in either δ15N or δ13C values; however, within a family (two parents, one chick), chicks were fed at or slightly below the adult trophic level. I found little variation in stable isotope values, which suggests adult murres did not preferentially select prey for either themselves or their chicks.

Finally, stable isotope analysis was used to investigate the winter foraging ecology of three species of alcids (Thick-billed Murres, Common Murres (U. aalge) and Razorbills (Alca torda)) off Newfoundland, Canada, 1996–2004. Thick-billed Murres fed at a higher trophic level than Common Murres. Razorbill δ15N values were highly variable and overlapped those of both murre species. I found no significant differences in δ13C values among the three species, confirming a common spatial feeding pattern. Both murre species became depleted in 13C during winter, suggesting that foraging location or prey species shifted from nearshore to offshore. For Common Murres, hatching-year individuals fed at a higher trophic level and foraged farther offshore than after-hatch-year birds. For Thick-billed Murres, I contrasted trophic levels determined at the breeding colony on Prince Leopold Island with those determined in winter over four years and found considerable inter-annual variation in the pattern of seasonal differences in trophic level. However, the proportion of lower-trophic-level (amphipod) vs. higher-trophic-level (fish) prey was generally greater in winter than in summer.
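The trophic-level reasoning used throughout this abstract rests on a standard stable-isotope relationship: a consumer's trophic level is estimated from its δ15N enrichment over a measured baseline organism, conventionally assuming roughly 3.4‰ enrichment per trophic step. A minimal sketch of that calculation follows; the isotope values in the example are invented for illustration and are not the thesis data.

```python
def trophic_level(d15n_consumer, d15n_baseline,
                  baseline_tl=2.0, enrichment=3.4):
    """Estimate trophic level from nitrogen stable-isotope values.

    Uses the conventional relationship
        TL = baseline_TL + (d15N_consumer - d15N_baseline) / enrichment,
    with ~3.4 permil enrichment per trophic step (a commonly assumed value).
    """
    return baseline_tl + (d15n_consumer - d15n_baseline) / enrichment

# Hypothetical example: a seabird at 14.8 permil, against a
# primary-consumer baseline (trophic level 2) at 8.0 permil.
tl = trophic_level(14.8, 8.0)
print(f"estimated trophic level: {tl:.2f}")  # 2 + 6.8/3.4 = 4.00
```

The same arithmetic explains why a drop in δ15N of about 3.4‰ between years corresponds to eating roughly one trophic level lower, e.g. amphipods instead of fish.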
199

Beam Modelling for Treatment Planning of Scanned Proton Beams / Strålmodellering i dosplaneringssyfte för svepta protonstrålar

Kimstrand, Peter January 2008
Scanned proton beams make it possible to exploit fully the dose deposition properties of protons, i.e. the limited range and the sharp peak at the end of the range, the Bragg peak. By actively scanning the beam, laterally with scanning magnets and longitudinally by shifting the energy, the position of the Bragg peak can be controlled in all three dimensions, enabling high dose delivery to the target volume only. A typical scanned proton beam line consists of a pair of scanning magnets for the lateral beam scanning and, possibly, a range shifter and a multi-leaf collimator (MLC). Part of this thesis deals with the development of control, supervision and verification methods for the scanned proton beam line at The Svedberg Laboratory in Uppsala, Sweden. Radiotherapy is preceded by treatment planning, one of whose main objectives is to predict the dose to the patient. The dose is computed by a dose calculation engine, and the accuracy of the result depends on the accuracy and sophistication of the engine's transport and interaction models. But for the calculated dose distribution to have any bearing on reality, the calculation must start from input that corresponds to the beam actually emitted by the treatment machine. This input is provided by the beam model: the link between reality (the treatment machine) and the treatment planning system. The beam model contains methods to characterise the treatment machine and supplies the dose calculation with the reconstructed beam phase space in a convenient representation. For a beam model to be applicable in a treatment planning system, its methods have to be general. In this thesis, a beam model for a scanned proton beam is developed, containing models and descriptions of the beam-modifying elements of a scanned proton beam line.
Based on a well-defined set of generally applicable characterisation measurements, ten beam model parameters are extracted, describing the basic properties of the beam, i.e. the energy spectrum, the radial and the angular distributions and the nominal direction. Optional beam modifying elements such as a range shifter and an MLC are modelled by dedicated Monte Carlo calculation algorithms. The algorithm that describes the MLC contains a parameterisation of collimator scatter, in which the rather complex phase space of collimator scattered protons has been parameterised by a set of analytical functions. Dose calculations based on the phase space reconstructed by the beam model are in good agreement with experimental data. This holds both for the dose distribution of the elementary pencil beam, reflecting the modelling of the basic properties of the scanned beam, as well as for complete calculations of collimated scanned fields.
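The phase-space reconstruction described here can be caricatured as sampling particle states from a handful of fitted beam parameters. The sketch below draws energy, radial offset and angular divergence from uncorrelated Gaussians; every parameter value is invented, and a real beam model (including the thesis's ten parameters and their correlations) is considerably more involved.

```python
import math
import random

def sample_phase_space(n, mean_energy=180.0, energy_sigma=0.8,
                       radial_sigma=3.0, angular_sigma=0.003, seed=0):
    """Draw n proton states (energy [MeV], x [mm], angle [rad]) from a
    simple uncorrelated-Gaussian beam model (illustrative parameters only)."""
    rng = random.Random(seed)
    return [(rng.gauss(mean_energy, energy_sigma),
             rng.gauss(0.0, radial_sigma),
             rng.gauss(0.0, angular_sigma))
            for _ in range(n)]

# A dose engine would transport each sampled state through the patient;
# here we only verify the sampled beam reproduces the model parameters.
states = sample_phase_space(100_000)
mean_e = sum(e for e, _, _ in states) / len(states)
rms_x = math.sqrt(sum(x * x for _, x, _ in states) / len(states))
print(f"mean energy {mean_e:.1f} MeV, RMS spot {rms_x:.2f} mm")
```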
200

Analysis and Optimization for Testing Using IEEE P1687

Ghani Zadegan, Farrokh January 2010
The IEEE P1687 (IJTAG) standard proposal aims at providing a standardized interface between on-chip embedded test, debug and monitoring logic (instruments), such as scan-chains and temperature sensors, and the Test Access Port of IEEE Standard 1149.1, which is mainly used for board test. A key feature of P1687 is the inclusion of Segment Insertion Bits (SIBs) in the scan path. SIBs make it possible to construct a multitude of different P1687 networks for the same set of instruments, and provide flexibility in test scheduling. The work presented in this thesis consists of two parts. In the first part, test application time is analysed for P1687 networks under two schedule types, namely concurrent and sequential test scheduling. Formulas and novel algorithms are presented to compute the test time for a given P1687 network and a given schedule type; the algorithms are implemented and employed in extensive experiments on realistic industrial designs. In the second part, the design of IEEE P1687 networks is studied. Designing the P1687 network that yields the least test application time for a given set of instruments is a time-consuming task in the absence of automatic design tools. In this thesis work, novel algorithms are presented for the automated design of P1687 networks, optimized with respect to test application time and the required number of SIBs. The algorithms are implemented and demonstrated in experiments on industrial SOCs.
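The kind of computation this abstract describes, test time for a given network and schedule type, can be sketched with a deliberately simplified model: a flat network in which each instrument sits behind its own SIB, every SIB adds one bit to the active scan path, and shift cycles are the only cost. The cycle counts below ignore the real standard's protocol overhead (capture/update cycles, hierarchical SIBs), so this illustrates the scheduling trade-off rather than P1687's actual timing rules.

```python
def sequential_test_time(instruments):
    """Total shift cycles when instruments are tested one at a time.

    `instruments` is a list of (scan_length, num_patterns) pairs. With one
    SIB per instrument, the active path while testing an instrument is its
    own chain plus one bit for every SIB in the network.
    """
    num_sibs = len(instruments)
    return sum(patterns * (num_sibs + length)
               for length, patterns in instruments)

def concurrent_test_time(instruments):
    """Total shift cycles when every SIB is open and all instruments are
    shifted together; the instrument needing the most patterns dominates."""
    num_sibs = len(instruments)
    path = num_sibs + sum(length for length, _ in instruments)
    return max(patterns for _, patterns in instruments) * path

# Hypothetical instrument set: (chain length in bits, patterns to apply).
instruments = [(100, 10), (20, 50), (500, 2)]
print("sequential:", sequential_test_time(instruments))
print("concurrent:", concurrent_test_time(instruments))
```

Even in this toy model the ranking of the two schedules depends on the instrument mix: a short chain needing many patterns makes the concurrent schedule pay for the full path on every pattern, which is exactly why network design and schedule choice interact.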
