1

Untersuchung der Erfolgsfaktoren fest installierter immersiv-audiovisueller Projektionen in Kunstmuseen / An investigation of the success factors of permanently installed immersive audiovisual projections in art museums

Fritsche, Julia Carlotta 22 August 2022 (has links)
Diese Arbeit befasst sich mit der Frage, wie eine immersiv wirkende Projektion in Kunstmuseen erfolgreich gestaltet werden kann und welche Faktoren dazu berücksichtigt werden müssen. Dabei werden die Perspektiven der Besucher sowie der Umsetzer näher betrachtet. Inhalt dieser Projektion können klassische Gemälde oder digitale Kunstwerke sein, die in einer 360°-Konstellation im Raum projiziert werden. Zunächst werden der ökonomische Blick auf Kunstmuseen und Ausstellungen sowie immersiv-audiovisuelle Projektionen behandelt. Darauf folgt eine Betrachtung der klassischen Erfolgsfaktorenforschung sowie der Erfolgsfaktoren von Kunstausstellungen. Anhand dieser theoretischen Basis entwickelt die Autorin ein potenzielles Erfolgsfaktorenmodell, das in einer praktischen Untersuchung mittels einer direkten, methodisch und materiell gestützten Ermittlung überprüft wird. Hierfür werden zuerst Nutzerkommentare auf einer Online-Bewertungsseite zum Thema immersiv-audiovisueller Projektionen analysiert, worauf sich eine Besucherbefragung in einem Kunstmuseum anschließt, das eine immersiv-audiovisuelle Projektion (IAP) anbietet. Anhand zweier Experteninterviews werden die potenziellen umsetzerbezogenen Erfolgsfaktoren überprüft. Die Autorin dieser Arbeit konnte vier besucherbezogene Haupterfolgsfaktoren und 25 dazugehörige Erfolgsfaktoren herauskristallisieren. Auf umsetzerbezogener Seite sind es zehn Haupterfolgsfaktoren sowie 27 dazugehörige Erfolgsfaktoren. Das Erfolgsfaktorenmodell basiert auf dem Bestreben, kulturpolitische sowie ökonomische Ziele zu erreichen. Auf Basis dieser Erfolgsfaktoren werden Handlungsempfehlungen abgeleitet, die sich im Rahmen dieser Arbeit vor allem an die Umsetzer einer IAP, wie beispielsweise Kunstmuseen, deren Mitarbeiter oder Künstlerkollektive, wenden.
/ This work deals with the question of how an immersive projection in art museums can be designed successfully and which factors have to be taken into account. The perspectives of the visitors as well as those of the implementers are examined more closely. The content of such a projection can be classical paintings or digital artworks, which are projected in a 360° constellation around the room. First, the economic view of art museums and exhibitions is discussed, along with immersive audiovisual projections. This is followed by an examination of classical success factor research and of the success factors of art exhibitions. On the basis of this theoretical foundation, the author develops a potential success factor model, which is then tested in a practical investigation by means of a direct, methodologically and materially supported inquiry. For this purpose, user comments on an online review site dealing with immersive audiovisual projections are analysed first, followed by a visitor survey in an art museum that offers an immersive audiovisual projection (IAP). Based on two expert interviews, the potential implementer-related success factors are examined. The author identified four visitor-related main success factors with 25 associated success factors; on the implementer side, there are ten main success factors with 27 associated success factors. The success factor model is based on the endeavour to achieve both cultural-policy and economic goals. From these success factors, recommendations for action are derived, which in the context of this work are addressed primarily to the implementers of an IAP, such as art museums, their staff or artists' collectives.
Gliederung: I Theoretischer Teil 1 Einleitung 1.1 Ausgangspunkt und Motivation 1.2 Zielsetzung, Fragestellungen und Abgrenzung 1.3 Methodik und Aufbau der Arbeit 2 Das Kunstmuseum: zwischen Kunst und Ökonomie 2.1 Notwendigkeit des ökonomischen Blicks auf Kunstmuseen 2.2 Der ökonomische Blick auf Kunstmuseen 2.3 Ausstellungen und ihr Management 3 Immersiv-audiovisuelle Projektionen 3.1 Die Rolle der digitalen Medien in Museen 3.2 Immersion 3.3 Bedeutung immersiver Projektionsräume für Kunstmuseen 3.4 Umsetzung einer immersiv-audiovisuellen Projektion 3.5 Anbieter fest installierter IAPs in Kunstmuseen 4 Erfolgsfaktorenforschung 4.1 Die klassische Erfolgsfaktorenforschung 4.2 Erfolg von Kunstausstellungen 5 Entwicklung eines theoretischen Erfolgsfaktorenmodells für IAP 5.1 Transponierung der Erfolgsfaktoren 5.2 Potenzielle Hierarchisierung 5.3 Theoretisches Erfolgsfaktorenmodell II Praktischer Teil 6 Methodik und Vorgehensweise 6.1 Forschungsfragen und Thesen 6.2 Methodik und Durchführung 7 Ergebnisse der Untersuchung 7.1 Analyse der Bewertungswebseite 7.2 Besucherbefragung 7.3 Experteninterviews 7.4 Zusammenfassung 8 Diskussion 8.1 Bewertung der Methodik und Limitationen 8.2 Bewertung der Ergebnisse 8.3 Implikationen für zukünftige Arbeiten 9 Handlungsempfehlungen 10 Fazit 11 Ausblick
2

Texturierung und Visualisierung virtueller 3D-Stadtmodelle / Texturing and Visualization of Virtual 3D City Models

Lorenz, Haik January 2011 (has links)
Im Mittelpunkt dieser Arbeit stehen virtuelle 3D-Stadtmodelle, die Objekte, Phänomene und Prozesse in urbanen Räumen in digitaler Form repräsentieren. Sie haben sich zu einem Kernthema von Geoinformationssystemen entwickelt und bilden einen zentralen Bestandteil geovirtueller 3D-Welten. Virtuelle 3D-Stadtmodelle finden nicht nur Verwendung als Mittel für Experten in Bereichen wie Stadtplanung, Funknetzplanung oder Lärmanalyse, sondern auch für allgemeine Nutzer, die realitätsnah dargestellte virtuelle Städte in Bereichen wie Bürgerbeteiligung, Tourismus oder Unterhaltung nutzen und z. B. in Anwendungen wie GoogleEarth eine räumliche Umgebung intuitiv erkunden und durch eigene 3D-Modelle oder zusätzliche Informationen erweitern. Die Erzeugung und Darstellung virtueller 3D-Stadtmodelle besteht aus einer Vielzahl von Prozessschritten, von denen in der vorliegenden Arbeit zwei näher betrachtet werden: Texturierung und Visualisierung. Im Bereich der Texturierung werden Konzepte und Verfahren zur automatischen Ableitung von Fototexturen aus georeferenzierten Schrägluftbildern sowie zur Speicherung oberflächengebundener Daten in virtuellen 3D-Stadtmodellen entwickelt. Im Bereich der Visualisierung werden Konzepte und Verfahren für die multiperspektivische Darstellung sowie für die hochqualitative Darstellung nichtlinearer Projektionen virtueller 3D-Stadtmodelle in interaktiven Systemen vorgestellt. Die automatische Ableitung von Fototexturen aus georeferenzierten Schrägluftbildern ermöglicht die Veredelung vorliegender virtueller 3D-Stadtmodelle. Schrägluftbilder bieten sich zur Texturierung an, da sie einen Großteil der Oberflächen einer Stadt, insbesondere Gebäudefassaden, mit hoher Redundanz erfassen. Das Verfahren extrahiert aus dem verfügbaren Bildmaterial alle Ansichten einer Oberfläche und fügt diese pixelpräzise zu einer Textur zusammen. Durch Anwendung auf alle Oberflächen wird das virtuelle 3D-Stadtmodell flächendeckend texturiert. Der beschriebene Ansatz wurde am Beispiel des offiziellen Berliner 3D-Stadtmodells sowie der in GoogleEarth integrierten Innenstadt von München erprobt. Die Speicherung oberflächengebundener Daten, zu denen auch Texturen zählen, wurde im Kontext von CityGML, einem international standardisierten Datenmodell und Austauschformat für virtuelle 3D-Stadtmodelle, untersucht. Es wird ein Datenmodell auf Basis computergrafischer Konzepte entworfen und in den CityGML-Standard integriert. Dieses Datenmodell richtet sich dabei an praktischen Anwendungsfällen aus und lässt sich domänenübergreifend verwenden. Die interaktive multiperspektivische Darstellung virtueller 3D-Stadtmodelle ergänzt die gewohnte perspektivische Darstellung nahtlos um eine zweite Perspektive mit dem Ziel, den Informationsgehalt der Darstellung zu erhöhen. Diese Art der Darstellung ist durch die Panoramakarten von H. C. Berann inspiriert; Hauptproblem ist die Übertragung des multiperspektivischen Prinzips auf ein interaktives System. Die Arbeit stellt eine technische Umsetzung dieser Darstellung für 3D-Grafikhardware vor und demonstriert die Erweiterung von Vogel- und Fußgängerperspektive. Die hochqualitative Darstellung nichtlinearer Projektionen beschreibt deren Umsetzung auf 3D-Grafikhardware, wobei neben der Bildwiederholrate die Bildqualität das wesentliche Entwicklungskriterium ist.
Insbesondere erlauben die beiden vorgestellten Verfahren, dynamische Geometrieverfeinerung und stückweise perspektivische Projektionen, die uneingeschränkte Nutzung aller hardwareseitig verfügbaren, qualitätssteigernden Funktionen wie z. B. Bildraumgradienten oder anisotroper Texturfilterung. Beide Verfahren sind generisch und unterstützen verschiedene Projektionstypen. Sie ermöglichen die anpassungsfreie Verwendung gängiger computergrafischer Effekte wie Stilisierungsverfahren oder prozeduraler Texturen für nichtlineare Projektionen bei optimaler Bildqualität. Die vorliegende Arbeit beschreibt wesentliche Technologien für die Verarbeitung virtueller 3D-Stadtmodelle: Zum einen lassen sich mit den Ergebnissen der Arbeit Texturen für virtuelle 3D-Stadtmodelle automatisiert herstellen und als eigenständige Attribute in das virtuelle 3D-Stadtmodell einfügen. Somit trägt diese Arbeit dazu bei, die Herstellung und Fortführung texturierter virtueller 3D-Stadtmodelle zu verbessern. Zum anderen zeigt die Arbeit Varianten und technische Lösungen für neuartige Projektionstypen virtueller 3D-Stadtmodelle in interaktiven Visualisierungen. Solche nichtlinearen Projektionen stellen Schlüsselbausteine dar, um neuartige Benutzungsschnittstellen für und Interaktionsformen mit virtuellen 3D-Stadtmodellen zu ermöglichen, insbesondere für mobile Geräte und immersive Umgebungen. / This thesis concentrates on virtual 3D city models that digitally encode objects, phenomena, and processes in urban environments. Such models have become core elements of geographic information systems and constitute a major component of geovirtual 3D worlds. Expert users make use of virtual 3D city models in various application domains, such as urban planning, radio-network planning, and noise immission simulation. Regular users utilize virtual 3D city models in domains such as tourism and entertainment. They intuitively explore photorealistic virtual 3D city models through mainstream applications such as GoogleEarth, which additionally enable users to extend virtual 3D city models with custom 3D models and supplemental information. Creation and rendering of virtual 3D city models comprise a large number of processes, of which texturing and visualization are the focus of this thesis. In the area of texturing, this thesis presents concepts and techniques for automatic derivation of photo textures from georeferenced oblique aerial imagery and a concept for the integration of surface-bound data into virtual 3D city model datasets. In the area of visualization, this thesis presents concepts and techniques for multiperspective views and for high-quality rendering of nonlinearly projected virtual 3D city models in interactive systems. The automatic derivation of photo textures from georeferenced oblique aerial imagery is a refinement process for a given virtual 3D city model. Our approach uses oblique aerial imagery, since it provides citywide, highly redundant coverage of surfaces, particularly building facades. From this imagery, our approach extracts all views of a given surface and creates a photo texture by selecting the best view on a pixel level. By processing all surfaces, the virtual 3D city model becomes completely textured. This approach has been tested for the official 3D city model of Berlin and the model of the inner city of Munich accessible in GoogleEarth.
The integration of surface-bound data, which include textures, into virtual 3D city model datasets has been performed in the context of CityGML, an international standard for the exchange and storage of virtual 3D city models. We derive a data model from a set of use cases and integrate it into the CityGML standard. The data model uses well-known concepts from computer graphics for data representation. Interactive multiperspective views of virtual 3D city models seamlessly supplement a regular perspective view with a second perspective. Such a construction is inspired by the panorama maps of H. C. Berann and aims at increasing the amount of information in the image. A key aspect is the construction's use in an interactive system. This thesis presents an approach to create multiperspective views on 3D graphics hardware and exemplifies the extension of bird's eye and pedestrian views. High-quality rendering of nonlinearly projected virtual 3D city models focuses on the implementation of nonlinear projections on 3D graphics hardware. The developed concepts and techniques treat image quality, alongside the frame rate, as the primary design criterion. This thesis presents two such concepts, namely dynamic mesh refinement and piecewise perspective projections, which both enable the use of all graphics hardware features, such as screen-space gradients and anisotropic texture filtering, under nonlinear projections. Both concepts are generic and customizable towards specific projections. They enable the use of common computer graphics effects, such as stylization effects or procedural textures, for nonlinear projections at optimal image quality and interactive frame rates. This thesis comprises essential techniques for virtual 3D city model processing. First, the results of this thesis enable the automated creation of textures for virtual 3D city models and their integration into the models as individual attributes. Hence, this thesis contributes to an improved creation and updating of textured virtual 3D city models. Furthermore, the results provide novel approaches to and technical solutions for projecting virtual 3D city models in interactive visualizations. Such nonlinear projections are key components of novel user interfaces and interaction techniques for virtual 3D city models, particularly on mobile devices and in immersive environments.
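To make the per-pixel view-selection idea from the texturing part concrete, here is a minimal, hedged sketch in Python. It is not the thesis' pipeline (which also handles georeferencing, rectification and occlusion); the array shapes and the per-texel quality score are illustrative assumptions only.

```python
# Sketch: fuse several already-rectified oblique views of one facade into a
# single texture by taking each texel from the view with the best quality score.
import numpy as np

def fuse_views(colors: np.ndarray, quality: np.ndarray) -> np.ndarray:
    """colors:  (n_views, H, W, 3) candidate images rectified onto the facade's texture grid.
    quality: (n_views, H, W) per-texel score (higher is better); use -inf where a view has no coverage.
    Returns an (H, W, 3) texture that takes every texel from the best-scoring view."""
    best = np.argmax(quality, axis=0)                       # (H, W): index of the winning view per texel
    rows, cols = np.mgrid[0:best.shape[0], 0:best.shape[1]]
    return colors[best, rows, cols]                         # gather the winning colors

# Tiny synthetic example: two 4x4 "views" of the same facade patch.
rng = np.random.default_rng(0)
colors = rng.random((2, 4, 4, 3))
quality = rng.random((2, 4, 4))
print(fuse_views(colors, quality).shape)                    # -> (4, 4, 3)
```

The thesis' contribution lies in obtaining such rectified views and per-pixel scores automatically from georeferenced oblique imagery for every surface of the model; the snippet only illustrates the final fusion step.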
3

Robust utility maximization, f-projections, and risk constraints

Gundel, Anne 01 June 2006 (has links)
Ein wichtiges Gebiet der Finanzmathematik ist die Bestimmung von Auszahlungsprofilen, die den erwarteten Nutzen eines Agenten unter einer Budgetrestriktion maximieren. Wir charakterisieren optimale Auszahlungsprofile für einen Agenten, der in Bezug auf das genaue Marktmodell unsicher ist. Der hier benutzte Dualitätsansatz führt zu einem Minimierungsproblem für bestimmte konvexe Funktionale über zwei Mengen von Wahrscheinlichkeitsmaßen, das wir zunächst lösen müssen. Schließlich führen wir noch eine zweite Restriktion ein, die das Risiko beschränkt, das der Agent eingehen darf. Wir gehen dabei wie folgt vor: Kapitel 1. Wir betrachten das Problem, die f-Divergenz f(P|Q) über zwei Mengen von Wahrscheinlichkeitsmaßen zu minimieren, wobei f eine konvexe Funktion ist. Wir zeigen, dass unter der Bedingung f(∞)/∞ = ∞ Minimierer existieren, falls die erste Menge abgeschlossen und die zweite schwach kompakt ist. Außerdem zeigen wir, dass unter der Bedingung f(∞)/∞ = 0 ein Minimierer in einer erweiterten Klasse von Martingalmaßen existiert, falls die zweite Menge schwach kompakt ist. Kapitel 2. Die Existenzresultate aus dem ersten Kapitel implizieren die Existenz eines Auszahlungsprofils, das das robuste Nutzenfunktional inf E_Q[u(X)] über eine Menge von finanzierbaren Auszahlungen maximiert, wobei das Infimum über eine Menge von Modellmaßen betrachtet wird. Die entscheidende Idee besteht darin, die minimierenden Maße aus dem ersten Kapitel als gewisse "worst-case"-Maße zu identifizieren. Kapitel 3. Schließlich fordern wir, dass das Risiko der Auszahlungsprofile beschränkt ist. Wir lösen das robuste Problem in einem unvollständigen Marktmodell für Nutzenfunktionen, die nur auf der positiven Halbachse definiert sind. In einem Beispiel vergleichen wir das optimale Auszahlungsprofil unter der Risikorestriktion mit den optimalen Auszahlungen ohne eine solche Restriktion und unter einer Value-at-Risk-Nebenbedingung. / Finding payoff profiles that maximize the expected utility of an agent under some budget constraint is a key issue in financial mathematics. We characterize optimal contingent claims for an agent who is uncertain about the market model. The dual approach that we use leads to a minimization problem for a certain convex functional over two sets of measures, which we first have to solve. Finally, we incorporate a second constraint that limits the risk that the agent is allowed to take. We proceed as follows: Chapter 1. Given a convex function f, we consider the problem of minimizing the f-divergence f(P|Q) over these two sets of measures. We show that, if the first set is closed and the second set is weakly compact, a minimizer exists if f(∞)/∞ = ∞. Furthermore, we show that if the second set of measures is weakly compact and f(∞)/∞ = 0, then there is a minimizer in a class of extended martingale measures. Chapter 2. The existence results in Chapter 1 lead to the existence of a contingent claim which maximizes the robust utility functional inf E_Q[u(X)] over some set of affordable contingent claims, where the infimum is taken over a set of subjective or model measures. The key idea is to identify the minimizing measures from the first chapter as certain worst-case measures. Chapter 3. Finally, we require the risk of the contingent claims to be bounded. We solve the robust problem in an incomplete market for a utility function that is only defined on the positive half-line.
In an example we compare the optimal claim under this risk constraint with the optimal claims without a risk constraint and under a value-at-risk constraint.
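As a reading aid, the two central objects mentioned in this abstract can be written compactly as follows. The notation is mine and simplified (the divergence is written for P absolutely continuous with respect to Q, and the budget set with a supremum over a set of pricing measures is just one standard way to encode affordability in an incomplete market), so this is a paraphrase rather than the thesis' exact formulation.

```latex
% f-divergence (for a convex f) and the growth condition that separates the
% two existence results quoted in the abstract:
\[
  f(P \mid Q) \;=\; \mathbb{E}_Q\!\left[ f\!\left( \tfrac{dP}{dQ} \right) \right],
  \qquad
  \frac{f(\infty)}{\infty} \;:=\; \lim_{x \to \infty} \frac{f(x)}{x}.
\]
% Robust utility maximization over claims affordable with initial capital x,
% with model uncertainty expressed by the infimum over Q:
\[
  \max_{X \in \mathcal{X}(x)} \; \inf_{Q \in \mathcal{Q}} \mathbb{E}_Q\big[ u(X) \big],
  \qquad
  \mathcal{X}(x) = \Big\{ X \ge 0 : \sup_{P^* \in \mathcal{P}} \mathbb{E}_{P^*}[X] \le x \Big\}.
\]
```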
4

Sampling Algorithms for Evolving Datasets

Gemulla, Rainer 24 October 2008 (has links) (PDF)
Perhaps the most flexible synopsis of a database is a uniform random sample of the data; such samples are widely used to speed up the processing of analytic queries and data-mining tasks, to enhance query optimization, and to facilitate information integration. Most of the existing work on database sampling focuses on how to create or exploit a random sample of a static database, that is, a database that does not change over time. The assumption of a static database, however, severely limits the applicability of these techniques in practice, where data is often not static but continuously evolving. In order to maintain the statistical validity of the sample, any changes to the database have to be appropriately reflected in the sample. In this thesis, we study efficient methods for incrementally maintaining a uniform random sample of the items in a dataset in the presence of an arbitrary sequence of insertions, updates, and deletions. We consider instances of the maintenance problem that arise when sampling from an evolving set, from an evolving multiset, from the distinct items in an evolving multiset, or from a sliding window over a data stream. Our algorithms completely avoid any accesses to the base data and can be several orders of magnitude faster than algorithms that do rely on such expensive accesses. The improved efficiency of our algorithms comes at virtually no cost: the resulting samples are provably uniform and only a small amount of auxiliary information is associated with the sample. We show that the auxiliary information not only facilitates efficient maintenance, but it can also be exploited to derive unbiased, low-variance estimators for counts, sums, averages, and the number of distinct items in the underlying dataset. In addition to sample maintenance, we discuss methods that greatly improve the flexibility of random sampling from a system's point of view. More specifically, we initiate the study of algorithms that resize a random sample upwards or downwards. Our resizing algorithms can be exploited to dynamically control the size of the sample when the dataset grows or shrinks; they facilitate resource management and help to avoid under- or oversized samples. Furthermore, in large-scale databases with data being distributed across several remote locations, it is usually infeasible to reconstruct the entire dataset for the purpose of sampling. To address this problem, we provide efficient algorithms that directly combine the local samples maintained at each location into a sample of the global dataset. We also consider a more general problem, where the global dataset is defined as an arbitrary set or multiset expression involving the local datasets, and provide efficient solutions based on hashing.
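For readers unfamiliar with the underlying primitive, the insertion-only baseline that this line of work generalizes is classic reservoir sampling. The sketch below shows only that baseline (Vitter's Algorithm R); the thesis' own algorithms additionally handle updates, deletions, multisets, distinct-item and sliding-window sampling without touching the base data, none of which is reproduced here.

```python
# Background sketch: maintain a uniform sample of size k over a stream of
# insertions. After n insertions, every item is in the sample with probability k/n.
import random

def reservoir_sample(stream, k, rng=None):
    """Uniform random sample of k items from an iterable of insertions (Algorithm R)."""
    rng = rng or random.Random()
    sample = []
    for n, item in enumerate(stream, start=1):
        if n <= k:
            sample.append(item)          # fill the reservoir with the first k items
        else:
            j = rng.randrange(n)         # uniform index in {0, ..., n-1}
            if j < k:
                sample[j] = item         # keep the new item with probability k/n
    return sample

print(reservoir_sample(range(1_000_000), 5, rng=random.Random(42)))
```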
5

Organotypic brain slice co-cultures of the dopaminergic system - A model for the identification of neuroregenerative substances and cell populations / Organotypische Co-Kulturen dopaminerger Projektionssysteme - Modelle zur Identifizierung neuroregenerativer Substanzen und Zellpopulationen

Sygnecka, Katja 19 November 2015 (has links) (PDF)
The development of new therapeutic approaches, devised to foster the regeneration of neuronal circuits after injury and/or in neurodegenerative diseases, is of great importance. The impairment of dopaminergic projections is especially severe, because these projections are involved in crucial brain functions such as motor control, reward and cognition. In the work presented here, organotypic brain slice co-cultures of (a) the mesostriatal and (b) the mesocortical dopaminergic projection systems consisting of tissue sections of the ventral tegmental area/substantia nigra (VTA/SN), in combination with the target regions of (a) the striatum (STR) or (b) the prefrontal cortex (PFC), respectively, were used to evaluate different approaches to stimulate neurite outgrowth: (i) inhibition of cAMP/cGMP turnover with 3’,5’ cyclic nucleotide phosphodiesterase inhibitors (PDE-Is), (ii) blockade of calcium currents with nimodipine, and (iii) the co-cultivation with bone marrow-derived mesenchymal stromal/stem cells (BM-MSCs). The neurite growth-promoting properties of the tested substances and cell populations were analyzed by neurite density quantification in the border region between the two brain slices, using biocytin tracing or tyrosine hydroxylase labeling and automated image processing procedures. In addition, toxicological tests and gene expression analyses were conducted. (i) PDE-Is were applied to VTA/SN+STR rat co-cultures. The quantification of neurite density after both biocytin tracing and tyrosine hydroxylase labeling revealed a growth-promoting effect of the PDE2A-Is BAY60-7550 and ND7001. The application of the PDE10-I MP-10 did not alter neurite density in comparison to the vehicle control. (ii) The effects of nimodipine were evaluated in VTA/SN+PFC rat co-cultures. A neurite growth-promoting effect of 0.1 µM and 1 µM nimodipine was demonstrated in a projection system of the CNS. In contrast, the application of 10 µM nimodipine did not alter neurite density, compared to the vehicle control, but induced the activation of the apoptosis marker caspase 3. The expression levels of the investigated genes, including Ca2+-binding proteins (Pvalb, S100b), immediate early genes (Arc, Egr1, Egr2, Egr4, Fos and JunB), glial fibrillary acidic protein, and myelin components (Mal, Mog, Plp1) were not significantly changed (with the exception of Egr4) by the treatment with 0.1 µM and 1 µM nimodipine. (iii) Bulk BM-MSCs that were classically isolated by plastic adhesion were compared to MSCs derived from the Sca-1+Lin-CD45- subpopulation (SL45-MSCs). The neurite growth-promoting properties of both MSC populations were quantified in VTA/SN+PFC mouse co-cultures. For this purpose, the MSCs were seeded on glass slides that were placed underneath the co-cultures. A significantly enhanced neurite density within the co-cultures was induced by both bulk BM-MSCs and SL45-MSCs. SL45-MSCs increased neurite density to a higher degree. The characterization of both MSC populations revealed that the frequency of fibroblast colony forming units (CFU-f) is 105-fold higher in SL45-MSCs. SL45-MSCs were morphologically more homogeneous and expressed higher levels of nestin, BDNF and FGF2 compared to bulk BM-MSCs. Thus, this work emphasizes the vast potential for molecular targeting with respect to the development of therapeutic strategies to enhance neurite regrowth.
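Purely as an illustration of the kind of quantity reported above, a neurite-density value over a border-region ROI could be computed from a fluorescence image roughly as follows; the threshold, the ROI geometry and the synthetic data are assumptions for the example, and this is not the published image-processing procedure.

```python
# Sketch: neurite density as the fraction of above-threshold pixels inside a
# rectangular region of interest spanning the gap between the two slices.
import numpy as np

def neurite_density(image: np.ndarray, roi: tuple, threshold: float) -> float:
    """image: 2D array of fluorescence intensities (e.g. TH or biocytin signal).
    roi: (row_start, row_stop, col_start, col_stop) delimiting the border region.
    Returns the fraction of ROI pixels whose intensity exceeds the threshold."""
    r0, r1, c0, c1 = roi
    region = image[r0:r1, c0:c1]
    return float(np.mean(region > threshold))

# Synthetic example: a 200x200 "image" with a brighter band crossing the border.
rng = np.random.default_rng(1)
img = rng.normal(10, 2, size=(200, 200))
img[:, 90:110] += 5                      # fake neurites bridging the two slices
print(round(neurite_density(img, (0, 200, 80, 120), threshold=13.0), 3))
```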
6

Organotypic brain slice co-cultures of the dopaminergic system - A model for the identification of neuroregenerative substances and cell populations

Sygnecka, Katja 23 October 2015 (has links)
The development of new therapeutic approaches, devised to foster the regeneration of neuronal circuits after injury and/or in neurodegenerative diseases, is of great importance. The impairment of dopaminergic projections is especially severe, because these projections are involved in crucial brain functions such as motor control, reward and cognition. In the work presented here, organotypic brain slice co-cultures of (a) the mesostriatal and (b) the mesocortical dopaminergic projection systems consisting of tissue sections of the ventral tegmental area/substantia nigra (VTA/SN), in combination with the target regions of (a) the striatum (STR) or (b) the prefrontal cortex (PFC), respectively, were used to evaluate different approaches to stimulate neurite outgrowth: (i) inhibition of cAMP/cGMP turnover with 3’,5’ cyclic nucleotide phosphodiesterase inhibitors (PDE-Is), (ii) blockade of calcium currents with nimodipine, and (iii) the co-cultivation with bone marrow-derived mesenchymal stromal/stem cells (BM-MSCs). The neurite growth-promoting properties of the tested substances and cell populations were analyzed by neurite density quantification in the border region between the two brain slices, using biocytin tracing or tyrosine hydroxylase labeling and automated image processing procedures. In addition, toxicological tests and gene expression analyses were conducted. (i) PDE-Is were applied to VTA/SN+STR rat co-cultures. The quantification of neurite density after both biocytin tracing and tyrosine hydroxylase labeling revealed a growth-promoting effect of the PDE2A-Is BAY60-7550 and ND7001. The application of the PDE10-I MP-10 did not alter neurite density in comparison to the vehicle control. (ii) The effects of nimodipine were evaluated in VTA/SN+PFC rat co-cultures. A neurite growth-promoting effect of 0.1 µM and 1 µM nimodipine was demonstrated in a projection system of the CNS. In contrast, the application of 10 µM nimodipine did not alter neurite density, compared to the vehicle control, but induced the activation of the apoptosis marker caspase 3. The expression levels of the investigated genes, including Ca2+-binding proteins (Pvalb, S100b), immediate early genes (Arc, Egr1, Egr2, Egr4, Fos and JunB), glial fibrillary acidic protein, and myelin components (Mal, Mog, Plp1) were not significantly changed (with the exception of Egr4) by the treatment with 0.1 µM and 1 µM nimodipine. (iii) Bulk BM-MSCs that were classically isolated by plastic adhesion were compared to MSCs derived from the Sca-1+Lin-CD45- subpopulation (SL45-MSCs). The neurite growth-promoting properties of both MSC populations were quantified in VTA/SN+PFC mouse co-cultures. For this purpose, the MSCs were seeded on glass slides that were placed underneath the co-cultures. A significantly enhanced neurite density within the co-cultures was induced by both bulk BM-MSCs and SL45-MSCs. SL45-MSCs increased neurite density to a higher degree. The characterization of both MSC populations revealed that the frequency of fibroblast colony forming units (CFU-f) is 105-fold higher in SL45-MSCs. SL45-MSCs were morphologically more homogeneous and expressed higher levels of nestin, BDNF and FGF2 compared to bulk BM-MSCs. Thus, this work emphasizes the vast potential for molecular targeting with respect to the development of therapeutic strategies to enhance neurite regrowth.
Table of contents: Abbreviations 1; 1. Introduction 2; 1.1 The dopaminergic system 2; 1.2 Neurite regeneration following mechanical lesions of the CNS 7; 1.3 Organotypic brain slice co-cultures 8; 1.4 Promising substances and cells to enhance neuroregeneration 10; 1.5 The aim of the thesis 14; 2. The original research articles 16; 2.1 Phosphodiesterase 2 inhibitors promote axonal outgrowth in organotypic slice co-cultures 17; 2.2 Nimodipine enhances neurite outgrowth in dopaminergic brain slice co-cultures 35; 2.3 Mesenchymal stem cells support neuronal fiber growth in an organotypic brain slice co-culture model 50; 3. References 66; Appendices 73; Summary 73; Zusammenfassung 78; Curriculum Vitae 84; Track Record 85; Selbständigkeitserklärung 87; Acknowledgments 88
7

Sampling Algorithms for Evolving Datasets

Gemulla, Rainer 20 October 2008 (has links)
Perhaps the most flexible synopsis of a database is a uniform random sample of the data; such samples are widely used to speed up the processing of analytic queries and data-mining tasks, to enhance query optimization, and to facilitate information integration. Most of the existing work on database sampling focuses on how to create or exploit a random sample of a static database, that is, a database that does not change over time. The assumption of a static database, however, severely limits the applicability of these techniques in practice, where data is often not static but continuously evolving. In order to maintain the statistical validity of the sample, any changes to the database have to be appropriately reflected in the sample. In this thesis, we study efficient methods for incrementally maintaining a uniform random sample of the items in a dataset in the presence of an arbitrary sequence of insertions, updates, and deletions. We consider instances of the maintenance problem that arise when sampling from an evolving set, from an evolving multiset, from the distinct items in an evolving multiset, or from a sliding window over a data stream. Our algorithms completely avoid any accesses to the base data and can be several orders of magnitude faster than algorithms that do rely on such expensive accesses. The improved efficiency of our algorithms comes at virtually no cost: the resulting samples are provably uniform and only a small amount of auxiliary information is associated with the sample. We show that the auxiliary information not only facilitates efficient maintenance, but it can also be exploited to derive unbiased, low-variance estimators for counts, sums, averages, and the number of distinct items in the underlying dataset. In addition to sample maintenance, we discuss methods that greatly improve the flexibility of random sampling from a system's point of view. More specifically, we initiate the study of algorithms that resize a random sample upwards or downwards. Our resizing algorithms can be exploited to dynamically control the size of the sample when the dataset grows or shrinks; they facilitate resource management and help to avoid under- or oversized samples. Furthermore, in large-scale databases with data being distributed across several remote locations, it is usually infeasible to reconstruct the entire dataset for the purpose of sampling. To address this problem, we provide efficient algorithms that directly combine the local samples maintained at each location into a sample of the global dataset. We also consider a more general problem, where the global dataset is defined as an arbitrary set or multiset expression involving the local datasets, and provide efficient solutions based on hashing.
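The distributed part of this abstract (combining local samples into a sample of the global dataset) rests on a merging argument that can be sketched briefly. The snippet assumes two disjoint local datasets with known sizes and equal-sized uniform local samples; it illustrates the standard hypergeometric merge and is not claimed to be the thesis' exact algorithm.

```python
# Sketch: merge two uniform local samples into one uniform sample of the union
# by first drawing how many of the k output slots fall on dataset A.
import numpy as np

def merge_samples(sample_a, n_a, sample_b, n_b, k, rng=None):
    """Uniform size-k sample of the disjoint union A ∪ B, built from uniform size-k
    samples of A and B and the known local dataset sizes n_a and n_b."""
    rng = rng or np.random.default_rng()
    take_a = rng.hypergeometric(ngood=n_a, nbad=n_b, nsample=k)  # slots assigned to A
    part_a = rng.choice(sample_a, size=take_a, replace=False)    # uniform sub-sample of A's sample
    part_b = rng.choice(sample_b, size=k - take_a, replace=False)
    return np.concatenate([part_a, part_b])

# Example: local samples of size 10 from datasets of size 1000 and 500.
print(merge_samples(list(range(10)), 1000, list(range(100, 110)), 500, k=10,
                    rng=np.random.default_rng(7)))
```

The key property is that a uniform subsample of a uniform sample is itself uniform, so only the sample sizes and dataset sizes need to be exchanged, never the base data.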
