1 |
Fast Digitizing and Digital Signal Processing of Detector Signals
Hannaske, Roland, 31 March 2010
A fast-digitizer data-acquisition system recently installed at the neutron time-of-flight experiment nELBE, located at the superconducting electron accelerator ELBE of Forschungszentrum Dresden-Rossendorf, is tested with two different detector types. Preamplifier signals from a high-purity germanium detector are digitized, stored and finally processed. For a precise determination of the energy of the detected radiation, the moving-window deconvolution algorithm is used to compensate for the ballistic deficit, and different shaping algorithms are applied. The energy resolution is determined in an experiment with γ-rays from a ²²Na source and is compared to the energy resolution achieved with analog signal processing. In addition, signals from the photomultipliers of barium fluoride and plastic scintillation detectors, which have rise times of only a few nanoseconds, are digitized. The moment of interaction of the radiation with the detector is determined by methods of digital signal processing. To this end, different timing algorithms are implemented and tested with data from an experiment at nELBE. The time resolutions achieved with these algorithms are compared to each other as well as to reference values obtained from analog signal processing. In addition to these experiments, some properties of the digitizing hardware are measured and a program for the analysis of the stored, digitized data is developed. The analysis of the signals shows that the energy resolution achieved with the 10-bit digitizer system used here is not competitive with that of a 14-bit peak-sensing ADC, although the ballistic deficit can be fully corrected. In sub-nanosecond timing, however, digital methods give better results than analog signal processing.
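The abstract names two digital techniques without detail: moving-window deconvolution (MWD) for restoring the full charge from an exponentially decaying preamplifier pulse, and timing algorithms for sub-nanosecond pulse-arrival estimation. The sketch below shows one common form of each; the decay constant, window lengths, CFD fraction, delay and threshold are illustrative assumptions, not the parameters used at nELBE.

```python
import numpy as np

def mwd(v, tau, m):
    """Moving-window deconvolution of a sampled preamplifier signal v with
    exponential decay constant tau and window length m (both in samples):
    d[n] = v[n] - v[n-m] + (1/tau) * sum(v[n-m:n]). The running sum restores
    the charge lost through the decay, removing the ballistic deficit."""
    v = np.asarray(v, dtype=float)
    s = np.convolve(v, np.ones(m))[:len(v)]   # s[n] = v[n-m+1] + ... + v[n]
    d = np.zeros_like(v)
    d[m:] = v[m:] - v[:-m] + s[m - 1:-1] / tau
    return d

def trapezoid(d, l):
    """A moving average of length l <= m turns the MWD output into a
    trapezoidal pulse whose flat-top height gives the energy estimate."""
    return np.convolve(d, np.ones(l) / l)[:len(d)]

def cfd_time(v, fraction=0.3, delay=4, threshold=50.0):
    """Digital constant-fraction timing for a fast positive pulse: build
    c[n] = v[n-delay] - fraction * v[n] and locate its rising zero
    crossing with sub-sample precision by linear interpolation."""
    v = np.asarray(v, dtype=float)
    c = np.zeros_like(v)
    c[delay:] = v[:-delay] - fraction * v[delay:]
    start = int(np.argmax(v > threshold))     # search only after pulse onset
    i = start + int(np.where((c[start:-1] < 0) & (c[start + 1:] >= 0))[0][0])
    return i + c[i] / (c[i] - c[i + 1])       # arrival time in sampling units
```

Comparing the flat-top heights of `trapezoid(mwd(v, tau, m), l)` for different shaping lengths `l` mirrors the kind of shaping study the abstract describes.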
|
2 |
Werkzeuge für die Modellierung und Simulation in drahtlosen Netzwerken / Tools for Modeling and Simulation in Wireless Networks
Guenther, Marco, 02 July 2003
Workshop Mensch-Computer-Vernetzung
|
3 |
Ein Mailsystem mit Vorverarbeitung und Datenbankspeicherung / A Mail System with Preprocessing and Database Storage
Buschmann, Tilo, 23 October 2003
Workshop Mensch-Computer-Vernetzung
This is a concept for keeping the data of a mail system in a database. It is important that the same data can be accessed through standard protocols (POP3, IMAP) while also remaining retrievable directly from the database. I also discuss the preprocessing of e-mails (e.g., handling attachments separately).
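The abstract only outlines the concept. As a rough illustration, a database-backed mail store with separate attachment handling could look like the following sketch; the table layout and column names are my own assumptions, and the POP3/IMAP front ends are omitted.

```python
import email
import sqlite3
from email import policy

db = sqlite3.connect("mail.db")
db.executescript("""
CREATE TABLE IF NOT EXISTS messages (
    id INTEGER PRIMARY KEY, sender TEXT, subject TEXT, body TEXT);
CREATE TABLE IF NOT EXISTS attachments (
    id INTEGER PRIMARY KEY,
    message_id INTEGER REFERENCES messages(id),
    filename TEXT, mime_type TEXT, data BLOB);
""")

def store_message(raw_bytes):
    """Preprocess one raw RFC 822 message: store the text body in the
    messages table and each attachment separately as a BLOB, so that
    protocol servers and database clients can reach the same data."""
    msg = email.message_from_bytes(raw_bytes, policy=policy.default)
    body = msg.get_body(preferencelist=("plain",))
    cur = db.execute(
        "INSERT INTO messages (sender, subject, body) VALUES (?, ?, ?)",
        (str(msg["From"]), str(msg["Subject"]),
         body.get_content() if body else ""))
    for part in msg.iter_attachments():   # attachments handled separately
        db.execute(
            "INSERT INTO attachments (message_id, filename, mime_type, data)"
            " VALUES (?, ?, ?, ?)",
            (cur.lastrowid, part.get_filename(),
             part.get_content_type(), part.get_payload(decode=True)))
    db.commit()
```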
|
4 |
Angular Schematization in Graph Drawing
Kindermann, Philipp, January 2016
Graphs are a frequently used tool to model relationships among entities. A graph is a binary relation between objects, that is, it consists of a set of objects (vertices) and a set of pairs of objects (edges).
Networks are common examples of modeling data as a graph. For example, relationships between persons in a social network, or network links between computers in a telecommunication network can be represented by a graph.
The clearest way to illustrate the modeled data is to visualize the graphs. The field of Graph Drawing deals with the problem of finding algorithms that automatically generate graph visualizations. The task is to find a "good" drawing, which can be measured by different criteria such as the number of crossings between edges or the area used. In this thesis, we study Angular Schematization in Graph Drawing. By this, we mean drawings with large angles (for example, between the edges at common vertices or at crossing points).
The thesis consists of three parts. First, we deal with the placement of boxes. Boxes are axis-parallel rectangles that can, for example, contain text.
They can be placed on a map to label important sites, or used to describe semantic relationships between words in a word network. In the second part of the thesis, we consider graph drawings that visually guide the viewer. These drawings generally induce large angles between edges that meet at a vertex. Furthermore, the edges are drawn crossing-free and in a way that makes them easy to follow for the human eye. The third and final part is devoted to crossings with large angles. In drawings with crossings, it is important to have large angles between edges at their crossing point, preferably right angles.
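As a small illustration of the quantity the third part optimizes (drawings whose crossings are as close to right angles as possible are known in the graph-drawing literature as RAC drawings), the crossing angle of two straight-line edges can be computed as follows. The function is a generic geometric helper of my own, not code from the thesis.

```python
import math

def crossing_angle(p1, p2, q1, q2):
    """Smaller angle in degrees between the supporting lines of the
    segments p1-p2 and q1-q2 (assumed to cross); 90 degrees is the
    optimum sought in right-angle-crossing (RAC) drawings."""
    a1 = math.atan2(p2[1] - p1[1], p2[0] - p1[0])
    a2 = math.atan2(q2[1] - q1[1], q2[0] - q1[0])
    d = abs(a1 - a2) % math.pi            # direction difference of the lines
    return math.degrees(min(d, math.pi - d))

# The two diagonals of a unit square cross at the ideal 90 degrees:
print(crossing_angle((0, 0), (1, 1), (0, 1), (1, 0)))  # 90.0
```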
|
5 |
Die stochastische Wissenschaft und zwei Teilsysteme eines Web-basierten Informations- und Anwendungssystems zu ihrer Etablierung / The stochastic science and two subsystems of a web-based information and application system for its establishment
Binder, Andreas, January 2006
Stochastic thinking, Bernoulli stochastics and its information-technological realization, called Stochastikon, represent the basis for understanding and successfully utilizing stochastic science. This thesis defines the concept of stochastic thinking, introduces Bernoulli stochastics, which has been developed by Elart von Collani, and describes the IT system Stochastikon. The concept and the design of Stochastikon are outlined, and the aims, tasks and realizations of the two subsystems Mentor and Encyclopedia are given in detail. Stochastic thinking enables a realistic view of reality, that is, a view which is in agreement with observation and experience and thus takes into account uncertainty about future developments. In this context the term uncertainty is used exclusively with respect to future developments and materializes in variability. The sources of uncertainty are human ignorance about fixed facts on the one hand and randomness on the other. Bernoulli stochastics provides a set of rules for developing a quantitative model of uncertainty that explicitly incorporates these two sources, ignorance and randomness. The model is called Bernoulli-Space and forms the basis for reliable and precise quantitative procedures for statements about the random future (prediction procedures) as well as about the unknown fixed past (measurement procedures). The software system Stochastikon implements Bernoulli stochastics as a set of self-contained, intercommunicating subsystems. The subsystem Encyclopedia makes stochastic knowledge available, while the subsystem Mentor supports the user in solving (stochastic) problems by identifying the correct model, i.e., the correct Bernoulli-Space. The problem-solving process itself is free of uncertainty, because all uncertainty is modelled by the Bernoulli-Space.
|
6 |
Resilience, Provisioning, and Control for the Network of the Future / Ausfallsicherheit, Dimensionierungsansätze und Kontrollmechanismen für das Netz der Zukunft
Martin, Rüdiger, January 2008
The Internet sees an ongoing transformation process from a single best-effort service network into a multi-service network. In addition to traditional applications like e-mail, WWW traffic, or file transfer, future generation networks (FGNs) will carry services with real-time constraints and stringent availability and reliability requirements like Voice over IP (VoIP), video conferencing, virtual private networks (VPNs) for finance, other real-time business applications, tele-medicine, or tele-robotics. Hence, quality of service (QoS) guarantees and resilience to failures are crucial characteristics of an FGN architecture. At the same time, network operations must be efficient. This necessitates sophisticated mechanisms for the provisioning and the control of future communication infrastructures. In this work we investigate such mechanisms for resilient FGNs. There are many aspects of the provisioning and control of resilient FGNs, such as traffic matrix estimation, traffic characterization, traffic forecasting, mechanisms for QoS enforcement also during failure cases, resilient routing, or scalability concerns for future routing and addressing mechanisms. In this work we focus on three important aspects for which performance analysis can deliver substantial insights: load balancing for multipath Internet routing, fast resilience concepts, and advanced dimensioning techniques for resilient networks.

Routing in modern communication networks is often based on multipath structures, e.g., equal-cost multipath routing (ECMP) in IP networks, to facilitate traffic engineering and resiliency. When multipath routing is applied, load balancing algorithms distribute the traffic over the available paths towards the destination according to pre-configured distribution values. State-of-the-art load balancing algorithms operate either on the packet or the flow level. Packet-level mechanisms achieve highly accurate traffic distributions, but are known to have negative effects on the performance of transport protocols and should not be applied. Flow-level mechanisms avoid performance degradations, but at the expense of reduced accuracy. These inaccuracies may have unpredictable effects on link capacity requirements and complicate resource management. Thus, it is important to exactly understand the accuracy and dynamics of load balancing algorithms in order to exercise better network control. Knowing about their weaknesses, it is also important to look for alternatives and to assess their applicability in different networking scenarios. This is the first aspect of this work.

Component failures are inevitable during the operation of communication networks and lead to routing disruptions if no special precautions are taken. In case of a failure, the robust shortest-path routing of the Internet reconverges after some time to a state where all nodes are again reachable, provided physical connectivity still exists. But stringent availability and reliability criteria of new services make a fast reaction to failures obligatory for resilient FGNs. This led to the development of fast reroute (FRR) concepts for MPLS and IP routing. The operations of MPLS-FRR have already been standardized, but the standards leave some degrees of freedom for the resilient path layout, and it is important to understand the tradeoffs between the different options in order to efficiently provision resilient FGNs. In contrast, the standardization of IP-FRR is an ongoing process; the applicability and possible combinations of the different concepts are still open issues. IP-FRR also facilitates a comprehensive resilience framework for IP routing covering all steps of the failure recovery cycle. These points constitute the second aspect of this work.

Finally, communication networks are usually over-provisioned, i.e., they have much more capacity installed than actually required during normal operation. This is a precaution against various challenges such as network element failures. An alternative to this capacity overprovisioning (CO) approach is admission control (AC). AC blocks new flows in case of imminent overload due to unanticipated events in order to protect the QoS of the already admitted flows. On the one hand, CO is generally viewed as a simple mechanism, whereas AC is a more complex mechanism that complicates the network control plane and raises interoperability issues. On the other hand, AC appears more cost-efficient than CO. To obtain advanced provisioning methods for resilient FGNs, it is important to find suitable models for irregular events, such as failures and different sources of overload, and to incorporate them into capacity dimensioning methods. This allows for a fair comparison between CO and AC in various situations and yields a better understanding of the strengths and weaknesses of both concepts. Such an advanced capacity dimensioning method for resilient FGNs represents the third aspect of this work.
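To make the packet-level vs. flow-level distinction concrete, the sketch below shows a hash-based flow-level balancer of the kind the abstract alludes to: all packets of a flow hash to the same path (no reordering, so transport protocols are unharmed), but the realized traffic split only approximates the pre-configured distribution values, which is exactly the inaccuracy discussed above. Function and parameter names are my own illustration, not from the thesis.

```python
import hashlib
from bisect import bisect
from itertools import accumulate

def pick_path(flow, paths, weights):
    """Flow-level load balancing: map the flow's 5-tuple to [0, 1) with a
    hash and select a path according to the configured weight intervals.
    Every packet of a flow yields the same hash, hence the same path."""
    key = "|".join(map(str, flow)).encode()
    u = int.from_bytes(hashlib.sha1(key).digest()[:8], "big") / 2.0**64
    bounds = list(accumulate(weights))            # cumulative weights
    return paths[bisect(bounds, u * bounds[-1])]

# Example: a 70/30 split over two next hops; each flow sticks to one
# path, so the configured split is met only on average over many flows.
flow = ("10.0.0.1", "192.0.2.7", 443, 51824, "tcp")
print(pick_path(flow, ["path_A", "path_B"], [0.7, 0.3]))
```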
|
7 |
Performance Models for UMTS 3.5G Mobile Wireless Systems / Leistungsmodelle für UMTS 3.5G Mobilfunksysteme
Mäder, Andreas, January 2008
Mobile telecommunication systems of the 3.5th generation (3.5G) constitute a first step towards the requirements of an all-IP world. As the denotation suggests, 3.5G systems are not completely designed from scratch; instead, they have evolved from existing 3G systems like UMTS or cdma2000. 3.5G systems are primarily designed and optimized for packet-switched best-effort traffic, but they are also intended to increase system capacity by exploiting the available radio resources more efficiently. Systems based on cdma2000 are enhanced with 1xEV-DO (EV-DO: evolution, data-optimized). In the UMTS domain, the 3rd Generation Partnership Project (3GPP) specified the High Speed Packet Access (HSPA) family, consisting of High Speed Downlink Packet Access (HSDPA) and its counterpart High Speed Uplink Packet Access (HSUPA), also called Enhanced Uplink. The focus of this monograph is on HSPA systems, although the operating principles of other 3.5G systems are similar. Integrated systems are considered, i.e., 3.5G data channels coexist with "classical" UMTS data channels as described in the specifications of UMTS Release '99. One of the main contributions of this work are performance models which allow a holistic view of the system. The models consider user traffic on the flow level, so that parameters like bandwidth need to be recalculated only upon significant changes of the system state. The impact of the lower layers is captured by stochastic models. This approach combines accurate modeling with the ability to cope with computational complexity. Adopting this approach to HSDPA, we develop a new physical-layer abstraction model that takes radio resources, scheduling discipline, radio propagation and mobile device capabilities into account. Together with models for the calculation of network-wide interference and transmit powers, a discrete-event simulation and an analytical model based on a queuing-theoretical approach are proposed. For the Enhanced Uplink, we develop analytical models considering independent and correlated other-cell interference.
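The flow-level modeling idea (recomputing per-flow bandwidth only when the set of active flows changes) can be illustrated with a minimal discrete-event simulation of a shared downlink under processor sharing. This sketch shows only the bare principle with made-up parameters; the thesis's actual models additionally capture the physical-layer abstraction, interference, transmit powers and scheduling.

```python
import random

def simulate_ps_cell(lam, mean_size, capacity, n_flows=50000, seed=1):
    """Flow-level simulation of a shared (HSDPA-like) downlink: flows
    arrive as a Poisson process (rate lam), bring an exponentially
    distributed data volume (mean mean_size), and share the cell
    capacity equally (processor sharing). Bandwidth is recomputed only
    at flow arrivals and departures. Returns the mean transfer time."""
    rng = random.Random(seed)
    t, fid, completed, total_time = 0.0, 0, 0, 0.0
    active, start = {}, {}        # flow id -> remaining volume / arrival time
    next_arrival = rng.expovariate(lam)
    while completed < n_flows:
        if active:
            share = capacity / len(active)          # equal per-flow bandwidth
            fmin = min(active, key=active.get)      # flow finishing next
            next_departure = t + active[fmin] / share
        else:
            next_departure = float("inf")
        if next_arrival <= next_departure:          # event: new flow arrives
            if active:
                for f in active:                    # drain ongoing flows
                    active[f] -= share * (next_arrival - t)
            t = next_arrival
            active[fid] = rng.expovariate(1.0 / mean_size)
            start[fid] = t
            fid += 1
            next_arrival = t + rng.expovariate(lam)
        else:                                       # event: flow completes
            for f in active:
                active[f] -= share * (next_departure - t)
            t = next_departure
            del active[fmin]
            total_time += t - start.pop(fmin)
            completed += 1
    return total_time / completed

# Sanity check against the M/G/1-PS formula E[T] = (mean_size/capacity)/(1-rho):
# lam=0.5 flows/s, 2 Mbit mean size, 2 Mbit/s cell -> rho=0.5, E[T]=2 s.
print(simulate_ps_cell(lam=0.5, mean_size=2.0, capacity=2.0))
```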
|