21

Design of a 405/430 kHz, 100 kW Transformer with Medium Voltage Insulation Sheets

Sharfeldden, Sharifa 27 July 2023 (has links)
To achieve higher power density, converters and components must be able to handle higher voltage and current ratings at higher efficiency while also maintaining low cost and a compact footprint. To meet such demands, medium-voltage resonant converters have been favored by researchers for their ability to operate at higher switching frequencies. High-frequency (HF) operation enables soft switching which, when achieved, reduces switching losses via either zero-voltage switching (ZVS) or zero-current switching (ZCS), depending on the converter topology. In addition to lower switching losses, the converter operates with low-harmonic waveforms which produce less EMI than their hard-switching counterparts. Finally, these resonant converters can be more compact because higher switching frequencies imply a decreased volume of passive components. The passive component which benefits the most from this increased switching frequency is the transformer. The objective of this work is to design a >400 kHz, 100 kW transformer which will provide galvanic isolation in Solid-State Transformer (SST)-based Power Electronics Building Blocks (PEBBs) while maintaining high efficiency, high power density, and reduced size. This work aims to present a simplified design process for high-frequency transformers, highlighting the trade-offs between co-dependent resonant converter and transformer parameters and how to balance them during the design process. This work will also demonstrate a novel high-frequency transformer insulation design to achieve a partial discharge inception voltage (PDIV) of >10 kV. / Master of Science / As the world's population expands and countries progress, the demand for electricity that is high-powered, highly efficient, and dependable has increased exponentially. Further, it is integral to the longevity of global life that this development occurs in a fashion that mitigates environmental consequences. The power and technology sectors have been challenged to address the state of global environmental affairs, specifically regarding climate change, carbon dioxide emissions, and resource depletion. To move away from carbon-emitting, non-renewable energy sources and processes, renewable energy sources and electric power systems must be integrated into the power grid. However, the challenge lies in the fact that there is no easy way to interface between these renewable sources and the existing power grid. Such challenges have undermined the widespread adoption of the renewable energy systems that are needed to address environmental issues in a timely manner. Recent developments in power electronics have enabled the practical application of the solid-state transformer (SST). The SST aims to replace the current, widespread form of power transformation: the line-frequency transformer (50/60 Hz). This transformer is bulky, expensive, and requires a significant amount of additional circuitry to interface with renewable energy sources and electric power systems. The SST overcomes these drawbacks through high-frequency operation (>200 kHz), which enables higher power at a reduced size by capitalizing on the inverse proportionality between switching frequency and the size of the magnetic components. The realization and implementation of the SST can greatly advance the electrification of the transportation industry, which is a top contributor to carbon emissions. This work aims to demonstrate a >400 kHz, 100 kW SST with a novel magnetic design and insulation structure suited for electric ship applications.
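The size-frequency trade-off invoked in this abstract follows from the standard transformer sizing equations; a textbook sketch (McLyman-style area-product reasoning, not the author's specific design equations):

$$ V_{\mathrm{rms}} = 4.44\, f N A_c B_{\max}, \qquad A_p = A_c A_w = \frac{P_t}{K_f K_u B_{\max} J f} $$

Here $A_c$ is the core cross-section, $A_w$ the winding window area, $P_t$ the apparent power throughput, $J$ the winding current density, and $K_f$, $K_u$ the waveform and window-utilization factors. With all other quantities held fixed, the required area product, and hence roughly the core volume, scales as $1/f$: this is the basis for the claim that a >400 kHz design can be far smaller than a 50/60 Hz transformer of the same rating, with the practical limit set by core loss and by insulation requirements such as the >10 kV PDIV target above.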
22

Driving and inhibiting factors in the adoption of open source software in organisations

Greenley, Neil January 2015 (has links)
The aim of this research is to investigate the extent to which Open Source Software (OSS) adoption behaviour can empirically be shown to be governed by a set of self-reported (driving and inhibiting) salient beliefs of key informants in a sample of organisations. Traditional IS adoption/usage theory, methodology and practice are drawn on. These are then augmented with theoretical constructs derived from IT governance and organisational diagnostics to propose an artefact that aids the understanding of organisational OSS adoption behaviour, stimulates debate and aids operational management interventions. For this research, a combination of a quantitative method (Fisher's Exact Test) and a complementary qualitative method (Content Analysis) was used, with self-selection sampling. In addition, a combination of data and methods was used to establish a set of mixed-methods results (or meta-inferences). From a dataset of 32 completed questionnaires in the pilot study, and 45 in the main study, a relatively parsimonious set of statistically significant driving and inhibiting factors was successfully established (ranging from 95% to 99.5% confidence levels) for a variety of organisational OSS adoption behaviours (i.e. by year, by software category and by stage of adoption). In addition, in terms of mixed methods, the combined quantitative and qualitative data yielded a number of factors limited to a relatively small number of organisational OSS adoption behaviours. The findings of this research are that a relatively small set of driving and inhibiting salient beliefs (e.g. Security, Perpetuity, Unsustainable Business Model, Second Best Perception, Colleagues in IT Dept., Ease of Implementation and Organisation is an Active User) have proven very accurate in predicting certain organisational OSS adoption behaviours (e.g. self-reported Intention to Adopt OSS in 2014) via Binomial Logistic Regression Analysis.
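A minimal sketch of the quantitative method named above, Fisher's Exact Test on a 2x2 belief-versus-adoption contingency table (the counts are illustrative, not data from the thesis):

from scipy.stats import fisher_exact

# Hypothetical 2x2 table (illustrative only):
# rows: key informant reports the salient belief (yes / no)
# columns: organisation adopted OSS (yes / no)
table = [[18, 4],
         [7, 16]]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
# p < 0.05 (or < 0.005) would mark the belief as statistically significant
# at the 95% (or 99.5%) confidence level, the range reported in the thesis.

Fisher's Exact Test suits samples of this size (32 and 45 questionnaires) because, unlike the chi-squared test, it makes no large-sample approximation.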
23

Well-log based determination of rock thermal conductivity in the North German Basin

Fuchs, Sven January 2013 (has links)
In sedimentary basins, rock thermal conductivity can vary both laterally and vertically, thus altering the basin’s thermal structure locally and regionally. Knowledge of the thermal conductivity of geological formations and its spatial variations is essential, not only for quantifying basin evolution and hydrocarbon maturation processes, but also for understanding geothermal conditions in a geological setting. In conjunction with the temperature gradient, thermal conductivity represents the basic input parameter for the determination of the heat-flow density, which, in turn, is applied as a major input parameter in thermal modeling at different scales. Drill-core samples, which are necessary to determine thermal properties by laboratory measurements, are rarely available and often limited to previously explored reservoir formations. Thus, thermal conductivities of Mesozoic rocks in the North German Basin (NGB) are largely unknown. In contrast, geophysical borehole measurements are often available for the entire drilled sequence. Therefore, prediction equations to determine thermal conductivity based on well-log data are desirable. In this study, rock thermal conductivity was investigated on different scales by (1) providing thermal-conductivity measurements on Mesozoic rocks, (2) evaluating and improving commonly applied mixing models used to estimate matrix and pore-filled rock thermal conductivities, and (3) developing new well-log based equations to predict thermal conductivity in boreholes without core control. Laboratory measurements are performed on sedimentary rocks of major geothermal reservoirs in the Northeast German Basin (NEGB) (Aalenian, Rhaetian-Liassic, Stuttgart Fm., and Middle Buntsandstein). Samples are obtained from eight deep geothermal wells that reach depths of up to 2,500 m. Bulk thermal conductivities of Mesozoic sandstones range between 2.1 and 3.9 W/(m∙K), while matrix thermal conductivity ranges between 3.4 and 7.4 W/(m∙K). Local heat flow for the Stralsund location averages 76 mW/m², which is in good agreement with values reported previously for the NEGB. For the first time, in-situ bulk thermal conductivity is indirectly calculated for entire borehole profiles in the NEGB using the determined surface heat flow and measured temperature data. Average bulk thermal conductivity, derived for geological formations within the Mesozoic section, ranges between 1.5 and 3.1 W/(m∙K). The measurement of both dry- and water-saturated thermal conductivities allows further evaluation of different two-component mixing models which are often applied in geothermal calculations (e.g., arithmetic mean, geometric mean, harmonic mean, Hashin-Shtrikman mean, and effective-medium theory mean). It is found that the geometric-mean model shows the best correlation between calculated and measured bulk thermal conductivity. However, by applying new model-dependent correction equations, the quality of fit could be significantly improved and the error scatter of each model reduced. The ‘corrected’ geometric mean provides the most satisfying results and constitutes a universally applicable model for sedimentary rocks. Furthermore, lithotype-specific and model-independent conversion equations are developed, permitting a calculation of water-saturated thermal conductivity from dry-measured thermal conductivity and porosity within an error range of 5 to 10%.
The limited availability of core samples and the expense of core-based laboratory measurements make it worthwhile to use petrophysical well logs to determine thermal conductivity for sedimentary rocks. The approach followed in this study is based on detailed analyses of the relationships between the thermal conductivity of the rock-forming minerals most abundant in sedimentary rocks and the properties measured by standard logging tools. By using multivariate statistics separately for clastic, carbonate and evaporite rocks, the findings from these analyses allow the development of prediction equations from large artificial data sets that predict matrix thermal conductivity within an error of 4 to 11%. These equations are validated successfully on a comprehensive subsurface data set from the NGB. In comparison with previously published approaches, which were developed formation-dependently for certain areas, the newly developed equations show a significant error reduction of up to 50%. These results are used to infer rock thermal conductivity for entire borehole profiles. By inversion of corrected in-situ thermal-conductivity profiles, temperature profiles are calculated and compared to measured high-precision temperature logs. The resulting uncertainty in temperature prediction averages < 5%, which demonstrates the excellent temperature-prediction capability of the presented approach. In conclusion, data and methods are provided to achieve a much more detailed parameterization of thermal models. / Thermal modeling of the geological subsurface is an important tool for the exploration and assessment of deep-seated resources in sedimentary basins (e.g., hydrocarbons, heat). The lateral and vertical temperature distribution in the subsurface is controlled, besides the heat-flow density and the radiogenic heat production, mainly by the thermal conductivity of the deposited rock layers. These parameters constitute the essential input quantities for thermal models. This dissertation addresses the determination of rock thermal conductivity on different scales. This comprises (1) laboratory thermal-conductivity measurements on Mesozoic drill-core samples, (2) the evaluation and improvement of the predictive capability of mixing laws for calculating matrix and bulk thermal conductivity of sedimentary rocks, and (3) the development of new prediction equations using borehole geophysical measurements and multivariate analysis methods in the NGB. In the Northeast German Basin (NEGB), drill cores from deep geothermal wells (down to 2,500 m depth) were examined for their thermal and petrophysical properties for the most important Mesozoic geothermal reservoirs (Aalenian, Rhaetian-Liassic complex, Stuttgart Formation, Middle Buntsandstein). The thermal conductivity of Mesozoic sandstones varies on average between 2.1 and 3.9 W/(m∙K), while the matrix thermal conductivity varies on average between 3.4 and 7.4 W/(m∙K). Newly calculated values of surface heat-flow density (e.g., 76 mW/m² at Stralsund) are consistent with the results of earlier studies in the NEGB. For the first time in the NGB, an in-situ thermal-conductivity profile was calculated for the Mesozoic/Cenozoic interval at the Stralsund site. In-situ formation thermal conductivities, for stratigraphic intervals of interest as potential model layers, vary on average between 1.5 and 3.1 W/(m∙K) and provide a good basis for small-scale (local) thermal models.
Because drill-core samples are as a rule only available to a limited extent, and because laboratory determination of thermal conductivity is laborious, alternative methods were sought. The evaluation of petrophysical borehole measurements by mathematical-statistical methods is a long-used and proven approach, whose applicability is, however, restricted to the rock intervals encountered (genesis, geology, stratigraphy, etc.). A slightly modified approach was therefore developed. The thermophysical properties of the 15 most important rock-forming minerals (in sedimentary rocks) were analysed statistically, and an extensive synthetic data set was generated from variable mixtures of these basic minerals. This data set was processed by multivariate statistics, yielding regression equations for predicting matrix thermal conductivity for three rock groups (clastic, carbonate, evaporite). In a second step, empirical prediction equations for calculating bulk thermal conductivity were developed for a real data set (laboratory-measured thermal conductivity and standard borehole logs). The calculated conductivities show errors of between 5% and 11% compared with measured values. Application of the newly developed procedures, as well as of those published in the literature, to the NGB data set shows that the newly established equations always achieve the smallest prediction error. Inversion of the newly calculated thermal-conductivity profiles allows the derivation of synthetic temperature profiles, whose comparison with measured rock temperatures results in a mean error of < 5%. In geothermal calculations, two-component mixing models are frequently used to convert between matrix and bulk thermal conductivity (arithmetic mean, harmonic mean, geometric mean, Hashin-Shtrikman mean, effective-medium mean). An extensive data set of dry- and saturated-measured conductivities and porosities allows these models to be evaluated with regard to their predictive capability, which varies strongly among the models examined (errors: 5 to 53%); the geometric mean shows the best, though quantitatively still unsatisfactory, agreement. The development and application of mixing-model-specific correction equations leads to clearly reduced errors. The corrected geometric mean again shows the best agreement between calculated and measured values, with a clearly reduced error scatter, and appears to be a universally applicable mixing model for sedimentary rocks. The development of model-independent, rock-type-specific conversion equations enables estimation of the water-saturated bulk thermal conductivity from dry-measured conductivity and porosity with a mean error of < 9%. The presented data and newly developed methods will in future permit a more detailed and more precise parameterisation of thermal models of sedimentary basins.
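A minimal sketch of the geometric-mean mixing model that the study identifies as the best-performing two-component model, and its inversion to matrix conductivity (the values are illustrative, not measurements from the thesis):

# Geometric-mean mixing model: the bulk conductivity of a porous rock is the
# porosity-weighted geometric mean of matrix and pore-fluid conductivities.
lam_matrix = 5.0   # matrix thermal conductivity, W/(m∙K)  (illustrative)
lam_water = 0.6    # pore-water thermal conductivity, W/(m∙K)
phi = 0.15         # fractional porosity

lam_bulk = lam_matrix ** (1.0 - phi) * lam_water ** phi
print(f"saturated bulk conductivity = {lam_bulk:.2f} W/(m∙K)")  # ~3.64

# The same model inverted, recovering matrix conductivity from a measured
# bulk value and porosity -- the direction needed when working from logs:
lam_matrix_est = (lam_bulk / lam_water ** phi) ** (1.0 / (1.0 - phi))

The model-dependent correction equations and lithotype-specific conversion equations described above refine exactly this kind of estimate.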
24

Evaluation of biometric security systems against artificial fingers

Blommé, Johan January 2003 (has links)
Verification of users’ identities is normally carried out via PIN codes or ID cards. Biometric identification, identification by unique body features, offers an alternative solution to these methods.

Fingerprint scanning is the most common biometric identification method used today. It is a simple and quick method of identification and has therefore been favored over other biometric identification methods such as retina scanning or signature verification.

In this report, biometric security systems based on fingerprint scanners have been evaluated. The evaluation focuses on copies of real fingers, artificial fingers, as the intrusion method, but it also covers currently used identification algorithms and the strengths and weaknesses of the hardware solutions used.

The artificial fingers used in the evaluation were made of gelatin, as it resembles the surface of human skin in terms of moisture, electrical resistance and texture. Artificial fingers were based on ten subjects, whose real fingers and artificial counterparts were tested on three different fingerprint scanners. All scanners tested accepted artificial fingers as substitutes for real fingers. Results varied between users and scanners, but the artificial fingers were accepted in roughly one fourth to one half of the attempts.

Techniques used in image enhancement, minutiae analysis and pattern matching are analyzed. Normalization, binarization, quality markup and low-pass filtering are described within image enhancement. In minutiae analysis, connectivity numbers, point identification and skeletonization (thinning algorithms) are analyzed. Within pattern matching, direction field analysis and principal component analysis are described. Finally, combinations of minutiae analysis and pattern matching, hybrid models, are mentioned.

Based on the experiments made and the analysis of the techniques used, a recommendation for the future use and development of fingerprint scanners is made.
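As an illustration of two of the enhancement steps named above, a minimal sketch of mean/variance normalization and global binarization; it follows a commonly cited fingerprint-enhancement formulation and is not necessarily the exact algorithm evaluated in the report:

import numpy as np

def normalize(img: np.ndarray, m0: float = 0.5, v0: float = 0.05) -> np.ndarray:
    # Map the image to a prescribed mean m0 and variance v0, removing
    # scanner-dependent brightness and contrast.
    m, v = img.mean(), img.var()
    delta = np.sqrt(v0 * (img - m) ** 2 / v)
    return np.where(img > m, m0 + delta, m0 - delta)

def binarize(img: np.ndarray) -> np.ndarray:
    # Simplest global-threshold binarization; practical pipelines usually
    # threshold block-wise to cope with uneven ridge contrast.
    return (img >= img.mean()).astype(np.uint8)

ridge_image = np.random.rand(64, 64)   # stand-in for a scanned fingerprint
binary = binarize(normalize(ridge_image))

Skeletonization (thinning) would then reduce the binary ridges to one-pixel-wide lines from which minutiae can be read off via connectivity numbers.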
25

Investigation of Stress Changes at Mount St. Helens, Washington, and Receiver Functions at the Katmai Volcanic Group, Alaska, with an Additional Section on the Assessment of Spreadsheet-based Modules.

Lehto, Heather L. 01 January 2012 (has links)
Forecasting eruptions using volcano seismology is a subject that affects the lives and property of millions of people around the world. However, there is still much to learn about the inner workings of volcanoes and how these relate to the chance of eruption. This dissertation aims to broaden the knowledge needed to understand when a volcano is likely to erupt and how large that eruption might be. Chapters 2 and 3 focus on a technique that uses changes in the local stress field beneath a volcano to determine the source of these changes and help forecast eruptions, while Chapter 4 focuses on a technique, receiver functions, that shows great potential for imaging magma chambers beneath volcanoes. In Chapters 2 and 3 the source mechanisms of shallow volcano-tectonic earthquakes recorded at Mount St. Helens are investigated by calculating hypocenter locations and fault plane solutions (FPS) for shallow earthquakes recorded during two eruptive periods (1981-1986 and 2004-2008) and two non-eruptive periods (1987-2004 and 2008-2011). FPS show a mixture of normal, reverse, and strike-slip faulting during all periods, with a sharp increase in strike-slip faulting observed in 1987-1997 and an increase in normal faulting between 1998 and 2004 and again on September 25-29, 2004. FPS P-axis orientations (a proxy for σ1) show a ~90° rotation with respect to regional σ1 (N23°E) during 1981-1986 and 2004-2008, bimodal orientations (~N-S and ~E-W) during 1987-2004, and bimodal orientations at ~NE and ~SW during 2008-2011. These orientations are believed to be due to pressurization accompanying the shallow intrusion and subsequent eruption of magma as domes during 1981-1986 and 2004-2008, and the buildup of pore pressure beneath a shallow seismogenic volume during 1987-2004 and 2008-2011. Chapter 4 presents a study using receiver functions, which show the relative response of the Earth beneath a seismometer. Receiver functions are produced by deconvolving the vertical component of a seismogram from the horizontal components. The structure of the ground beneath the seismometer can then be inferred from the arrivals of P-to-S converted phases. Receiver functions were computed for the Katmai Volcanic Group, Alaska, at two seismic stations (KABU and KAKN) between January 2005 and July 2011. Receiver functions from station KABU clearly showed the arrival of the direct P-wave and the arrival from the Moho; however, receiver functions from station KAKN did not show the arrival from the Moho. In addition, changes in the amplitude and polarity of arrivals on receiver functions suggested that the structure beneath both KABU and KAKN is complex. Station KABU is likely underlain by dipping layers and/or anisotropy, while station KAKN may lie over a basin structure, an attenuating body, or some other highly complex structure. However, it is impossible to say for certain what the structure is under either station, as the azimuthal coverage is poor and the structure therefore cannot be modeled. This dissertation also includes a section (Chapter 6) on the assessment of spreadsheet-based modules used in two Introductory Physical Geology courses at the University of South Florida (USF). When faculty at USF began using spreadsheet-based modules to help teach students math and geology concepts, the students complained that they spent more time learning how to use Excel than they did learning the concepts presented in the modules.
To determine whether the sharp learning curve for Excel was hindering learning, we divided the students in two Introductory Physical Geology courses into two groups: one group was given a set of modules which instructed them to use Excel for all calculations; the other group was simply told to complete the calculations but was not instructed what method to use. The results of the study show that whether or not the students used Excel had very little to do with the level of learning they achieved. Despite complaints that Excel was hindering their learning, students in the study attained high gains for both the math and geology concepts presented in the modules, whether they used Excel or not.
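Returning to the receiver-function computation described earlier in this abstract: a standard formulation is frequency-domain deconvolution stabilized with a water level, sketched below, where the water-level fraction $c$ and Gaussian width $a$ are generic stabilization parameters rather than values from the dissertation:

$$ R(\omega) = \frac{H(\omega)\,\overline{V(\omega)}}{\max\!\left\{\,V(\omega)\overline{V(\omega)},\; c \max_{\omega} |V(\omega)|^{2}\right\}}\; e^{-\omega^{2}/(4a^{2})} $$

Here $V$ and $H$ are the Fourier transforms of the vertical and horizontal (radial) components, and the overbar denotes complex conjugation. Dividing out the vertical component removes the common source and path effects, leaving the P-to-S conversions generated beneath the station, such as the Moho arrival identified at station KABU.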
27

Att vara eller inte vara laglösa : En intervjustudie om hur den enskilda arkivsektorn ställer sig till att inkluderas i arkivlagen och deras plats i kulturpolitiken / To be or not to be lawless : An interview study regarding how Swedish private archival institutions respond to the possibility of being included in the Archival Law and their place in cultural politics

Hamrén, Nina, Svelander, Malin January 2020 (has links)
Introduction. The aim of this thesis is to examine how Swedish private archival institutions perceive the possibility of being included in the Archival Law. At present the Archival Law of 1990 applies only to official documents from the public sector. Recently, however, a proposal to change the legislation so that it in part also applies to private archives has been made in the newly published Archival Inquiry commissioned by the government. A more far-reaching proposal to include the private archives in the law has also been made by the Swedish National Archives. Method. We conducted a qualitative research study using semi-structured interviews with 10 informants from 8 different private archival institutions in Sweden. Analysis. By presenting what has been said regarding legislation for private archives in previous archival inquiries, government propositions and other official reports, we frame the idea of legislation for private archives by putting it in its cultural-political context. An important concept that permeates this thesis is that of cultural heritage and how it relates to private archives. The transcriptions from the interviews were analysed using force-field analysis, which has its roots in Kurt Lewin’s field theory. Results. By collecting the informants’ thoughts concerning a new legislation for private archives and analysing them as forces working for (driving forces) and against (restraining forces) change, we show the complexities surrounding this issue. Conclusion. In many cases, uncertainty about what the consequences of the new legislation would be for the private archival institutions prevents them from supporting the change. Our informants also feel that the Swedish National Archives has a top-down perspective which prevents it from listening to and learning from the private sector’s experiences. Collaboration between the public and the private sector seems to be the way forward. This is a two-year master’s thesis in Archival Science.
28

Evaluating Success Factors in Implementing E-Maintenance in Maintenance, Repair, and Overhaul (MRO) Organizations

Toves, Peter Rocky 01 January 2015 (has links)
Despite a more than decade-long process to transition aircraft maintenance practices from paper- to electronic-based systems, some organizations remain unable to complete this transition. Researchers have indicated that while organizations have invested resources in technology improvements, there remains a limited understanding of the factors that contribute to effectively managing technology-enabled change. The purpose of this case study was to identify and explore socio-technical (ST) factors that inhibit an effective transition from a paper-based system to an electronic-based system for aircraft maintenance. A conceptual model applying theories of change management, technology acceptance, systems thinking, and ST theory informed the research. Thirteen participants provided data via semi-structured interviews, field observations, follow-up interviews, other documentation, and a questionnaire. Data were analyzed with open and axial coding techniques to identify themes, which were then cross-checked and triangulated with observation and follow-up interview data. Findings revealed communication issues, a fundamental misconception in training, and a false assumption that all personnel easily acquire computer literacy. Benefits gained from this study should assist maintenance, repair, and overhaul (MRO) organizations within the Department of Defense to improve current and future technology implementation, as the research underscores real-life issues from a comparable organization. The implications for positive social change provide a greater understanding of technology-enabled change and contribute to the development of best practices for technology initiatives that address common ST issues in the MRO workplace.
29

Pattern Formation in Cellular Automaton Models - Characterisation, Examples and Analysis / Musterbildung in Zellulären Automaten Modellen - Charakterisierung, Beispiele und Analyse

Dormann, Sabine 26 October 2000 (has links)
Cellular automata (CA) are fully discrete dynamical systems. Space is represented by a regular lattice while time proceeds in finite steps. Each cell of the lattice is assigned a state, chosen from a finite set of "values". The states of the cells are updated synchronously according to a local interaction rule, whereby each cell obeys the same rule. Formal definitions of deterministic, probabilistic and lattice-gas CA are presented. With the so-called mean-field approximation, any CA model can be transformed into a deterministic model with continuous state space. CA rules which characterise movement, single-component growth and many-component interactions are designed and explored. It is demonstrated that lattice-gas CA offer a suitable tool for modelling such processes and for analysing them by means of the corresponding mean-field approximation. In particular, two types of many-component interactions in lattice-gas CA models are introduced and studied. The first CA captures in abstract form the essential ideas of activator-inhibitor interactions in biological systems. Despite the automaton's simplicity, self-organised formation of stationary spatial patterns emerging from a randomly perturbed uniform state is observed (Turing pattern). In the second CA, rules are designed to mimic the dynamics of excitable systems. The spatial patterns produced by this automaton are self-organised spiral waves and target patterns. Properties of both pattern-formation processes can be well captured by a linear stability analysis of the corresponding nonlinear mean-field (Boltzmann) equations.
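A minimal sketch of the synchronous update that this definition of a CA prescribes; the two-state rule below is illustrative and is not one of the thesis's activator-inhibitor or excitable-media rules:

import numpy as np

def ca_step(grid: np.ndarray) -> np.ndarray:
    # One synchronous update on a regular lattice with periodic boundaries:
    # every cell applies the same local rule to its von Neumann neighbourhood.
    neighbours = (np.roll(grid, 1, axis=0) + np.roll(grid, -1, axis=0) +
                  np.roll(grid, 1, axis=1) + np.roll(grid, -1, axis=1))
    total = neighbours + grid
    # Illustrative rule: a cell is active in the next step iff local activity
    # lies in a window (a crude caricature of activation with inhibition at
    # high densities).
    return ((total >= 2) & (total <= 3)).astype(np.uint8)

grid = (np.random.rand(64, 64) < 0.5).astype(np.uint8)  # random initial state
for _ in range(200):
    grid = ca_step(grid)

The mean-field approximation mentioned above replaces the lattice by the fraction of active cells and iterates its expected update, turning the CA into a deterministic map on the continuous interval [0, 1], which is what makes the linear stability analysis of the pattern-forming instabilities tractable.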
30

Erarbeitung neuer Lehrkonzepte für das Eisenbahnbetriebslabor der TU Dresden / Development of new teaching concepts for the railway operation laboratory of TU Dresden

Cichos, Moritz 30 March 2023 (has links)
This Master's thesis aims to develop new teaching concepts for the railway operation laboratory (Eisenbahnbetriebslabor) of TU Dresden. The motivation is the changing job profiles resulting from technical developments as well as the increasing economic restrictions on the operation of the teaching facilities. Using an analysis of professional fields and competences, the content requirements for teaching are first examined. This is followed by a comparison with the current teaching concepts. On this basis, concrete teaching concepts are developed for the respective target groups, with the focus on university-level education. The new concepts take all relevant forms of teaching and learning into account. Finally, the need for further development is specified in the form of organisational, content-related and technical measures for implementing the teaching concepts. Within a recommendation for action, the development of a training signal box and the installation of a driver's cab simulator are to be highlighted in particular. / This Master's thesis focuses on the development of new teaching concepts for the railway operation laboratory at TU Dresden, prompted by increasing economic restrictions on the operation of teaching facilities as well as the changing job profiles resulting from technical developments. At first, the content requirements for teaching are examined by analyzing the professional fields and the imparted competences. These results are then compared with the current teaching concepts. On this basis, concrete teaching concepts are developed for the respective target groups, with a special focus on university-level education. The new concepts consider all relevant forms of teaching and learning. Finally, the need for further development is specified in the form of organizational, content-related, and technical measures for the implementation of the teaching concepts. As part of a recommendation for action, the development of a training signal box and the installation of a driver's cab simulator are pointed out in particular.
Contents: 1 Introduction (aim of the thesis; methodological approach); 2 Analysis of professional fields (possible professional fields for VIW and BSI graduates; professional fields in transition, operative and non-operative; graduate surveys at the Institut für Bahnsysteme und öffentlichen Verkehr and of the BSI degree programme); 3 Target-group-specific competence analysis (beginners; advanced); 4 Analysis and comparison of current teaching concepts (basic and advanced course: organisational analysis, content analysis and competence comparison); 5 Development of new teaching concepts (basic and advanced course: lectures, exercises, laboratory practicals in the EBL, self-study); 6 Conclusions and recommendation for action; 7 Summary, further research needs and outlook.
