311

SCALING CHALLENGES IN DIGITAL VENTURES

Russ, Ricardo January 2018 (has links)
The number of startups is on the rise, particularly digital startups whose products are based entirely on software. These companies face a persistent challenge when it comes to increasing their user base, revenue, or market share. This process is called scaling, and it is essential for every startup seeking to establish itself in the market. While several generic models focus on scaling a business, there appears to be a lack of scientific research on the challenges encountered during the scaling process. This paper describes a qualitative study of purely digital companies that have scaled or are trying to scale. By comparing and contrasting the growth literature with the data generated by this study, it identifies several distinct challenges and barriers related to scaling digital companies. Beyond these challenges, our findings suggest that B2B and B2C companies face different challenges during their scaling processes.
312

Power efficient and power attacks resistant system design and analysis using aggressive scaling with timing speculation

Rathnala, Prasanthi January 2017 (has links)
Growing usage of smart and portable electronic devices demands that embedded system designers provide solutions with better performance and reduced power consumption. With the spread of IoT and embedded systems, not only the power and performance of these devices but also their security is becoming an important design constraint. In this work, a novel aggressive scaling technique based on timing speculation is proposed to overcome the drawbacks of traditional DVFS and, at the same time, to provide protection against power analysis attacks. Dynamic voltage and frequency scaling (DVFS) has proven to be the most suitable technique for power efficiency in processor designs, and owing to its promising benefits it continues to attract researchers' attention as a way to trade off power and performance in modern processors. Traditional DVFS has two main issues: 1) because it relies on pre-calculated operating points, the system cannot adapt to modern process variations; 2) since process, voltage, and temperature (PVT) variations are not considered, large timing margins are added to guarantee safe operation in their presence. The research presented here addresses these issues by employing aggressive scaling mechanisms to achieve greater power savings with increased performance. The approach uses in-situ timing error monitoring and recovery mechanisms to reduce the extra timing margins and to account for process variations. A novel timing error detection and correction mechanism, targeting greater power savings or higher performance, is presented. This technique is also shown to improve the security of processors against differential power analysis attacks, which can extract secret information from embedded systems without detailed knowledge of the device's internal architecture. Simulated and experimental data show that the technique can provide a performance improvement of 24% or power savings of 44% with low area and power overhead. Overall, the proposed aggressive scaling technique improves power consumption and performance while increasing the resistance of processors to power analysis attacks.
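As a rough illustration of the control principle (not the thesis's hardware implementation), the following Python sketch closes the loop between an in-situ error monitor and the supply voltage: the voltage is lowered until timing errors appear, then backed off, so that the static worst-case margin of traditional DVFS is replaced by measured error feedback. The error model and all thresholds are invented for illustration.

```python
import random

# Toy sketch of timing-error-driven aggressive voltage scaling.
# The error model below is invented; on real silicon the error rate
# would come from in-situ timing monitors (e.g. shadow latches).

V_MIN, V_MAX = 0.70, 1.10      # supply voltage bounds (volts), illustrative
TARGET = 1e-3                  # tolerated timing-error rate
STEP = 0.01                    # regulator step (volts)

def error_rate(vdd: float) -> float:
    """Hypothetical model: errors rise sharply below ~0.85 V."""
    base = max(0.0, 0.85 - vdd) * 0.5
    return base + random.uniform(0, 1e-4)   # small measurement noise

def aggressive_scaling(vdd: float = V_MAX, steps: int = 100) -> float:
    # Lower the voltage while the error monitors stay quiet; recover
    # margin as soon as errors are detected and corrected.
    for _ in range(steps):
        if error_rate(vdd) < TARGET:
            vdd = max(V_MIN, vdd - STEP)   # margin unused: scale down
        else:
            vdd = min(V_MAX, vdd + STEP)   # errors detected: back off
    return vdd

print(f"settled near {aggressive_scaling():.2f} V")
```

The loop settles just above the point where errors begin, which is how such schemes recover the margin that pre-calculated operating points would otherwise waste.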
313

Statistical analysis for the radiological characterization of radioactive waste in particle accelerators

Zaffora, Biagio 08 September 2017 (has links)
This thesis introduces a new method to characterize metallic very-low-level radioactive waste produced at the European Organization for Nuclear Research (CERN). The method is based on: 1. the calculation of a preliminary radionuclide inventory, i.e. the list of radionuclides that can be produced when particles interact with a surrounding medium; 2. the direct measurement of gamma emitters; and 3. the quantification of pure-alpha, pure-beta and low-energy X-ray emitters, called difficult-to-measure (DTM) radionuclides, using the so-called scaling factor (SF), correlation factor (CF) and mean activity (MA) techniques. The first stage of the characterization process is the calculation of the radionuclide inventory via either analytical or Monte Carlo codes. Once the preliminary inventory is obtained, the gamma-emitting radionuclides are measured via gamma-ray spectrometry on each package of the waste population, and the major gamma emitter, called the key nuclide (KN), is identified. The scaling factor method estimates the activity of DTM radionuclides by checking, on samples, for a consistent and repeated relationship between the activity of the key nuclide and that of the difficult-to-measure radionuclides. If a correlation exists, the activity of DTM radionuclides is evaluated using the scaling factor; otherwise, the mean activity of the collected samples is applied to the entire waste population. Finally, the correlation factor method is used when the activity of pure-alpha, pure-beta and low-energy X-ray emitters is too low to be quantified experimentally; in this case a theoretical correlation factor is calculated to link the activity of the radionuclides to be quantified to the activity of the key nuclide. The thesis describes the characterization method in detail, presents a complete case study, and reports the industrial-scale application of the method to over 1,000 m³ of radioactive waste, carried out at CERN between 2015 and 2017.
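The SF/MA decision described above can be sketched in a few lines of Python. The sketch assumes the scaling factor is taken as the geometric mean of the sample activity ratios and uses a Pearson correlation test to choose between the two methods; the sample data and the 0.8 threshold are illustrative, not CERN's actual acceptance criteria.

```python
import math

# Sketch of the scaling factor (SF) vs. mean activity (MA) decision.

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Sample activities (Bq/g): key nuclide (e.g. Co-60) and a DTM nuclide
# (e.g. Ni-63) measured on samples drawn from the waste population.
kn_act  = [1.0, 2.2, 0.5, 4.1, 3.0]
dtm_act = [0.11, 0.25, 0.05, 0.44, 0.35]

if abs(pearson_r(kn_act, dtm_act)) > 0.8:
    # Correlated: geometric mean of DTM/KN ratios gives the scaling factor,
    # applied to the gamma-spectrometry result of every package.
    ratios = [d / k for d, k in zip(dtm_act, kn_act)]
    sf = math.exp(sum(math.log(r) for r in ratios) / len(ratios))
    print(f"SF = {sf:.3f}; package with KN = 2.0 Bq/g -> DTM ~ {sf * 2.0:.3f} Bq/g")
else:
    # No usable correlation: fall back to the mean activity of the samples.
    print(f"MA = {sum(dtm_act) / len(dtm_act):.3f} Bq/g applied to all packages")
```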
314

Probabilistic Modelling and Inference using the Belief Propagation Algorithm

Martin, Victorin 23 May 2013 (has links)
In this work, we focus on the design and estimation, from partial observations, of graphical models of real-valued random variables. These models should be suited to a non-standard regression problem in which the identity of the observed variables (and therefore of the variables to predict) changes from one instance to the other. The nature of the problem and of the available data leads us to model the network as a Markov random field, a choice consistent with Jaynes' maximum entropy principle. For the prediction task, we turn to the Belief Propagation algorithm, in its classical or Gaussian flavor, whose simplicity and efficiency make it usable on large-scale networks. After providing a new result on the local stability of the algorithm's fixed points, we propose an approach based on a latent Ising model, in which dependencies between real-valued variables are encoded through a network of binary variables. To this end, we define the binary variables using the cumulative distribution functions of the associated real-valued variables. For the prediction task, the Belief Propagation algorithm must be modified to impose Bayesian-like constraints on the marginal distributions of the binary variables; estimation of the model parameters can then easily be performed from pairwise observations only. This approach is in effect a way to solve the regression problem by working on quantiles. Furthermore, we propose a greedy algorithm for estimating both the structure and the parameters of a Gauss-Markov random field, based on the Iterative Proportional Scaling procedure. At each iteration, the algorithm yields a new model whose likelihood, or an approximation of it in the case of partial observations, is higher than that of the previous model. Because it works by local perturbation, the algorithm also allows spectral constraints to be imposed, improving the compatibility of the resulting models with the Gaussian version of Belief Propagation. The performance of the different approaches is illustrated empirically on synthetic data.
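For readers unfamiliar with the algorithm, the Python sketch below runs sum-product Belief Propagation on a three-node binary chain, where the message passing recovers exact marginals. The potentials are invented, and the sketch does not reproduce the thesis's Gaussian or latent-Ising variants.

```python
import numpy as np

# Minimal sum-product Belief Propagation on a pairwise MRF (binary chain).

nodes = [0, 1, 2]
edges = [(0, 1), (1, 2)]                       # chain: 0 - 1 - 2
phi = {i: np.array([0.6, 0.4]) for i in nodes} # unary potentials, illustrative
psi = np.array([[1.2, 0.8],                    # pairwise coupling, shared by
                [0.8, 1.2]])                   # both edges for simplicity

# Directed messages m[(i, j)](x_j), initialized uniform.
msgs = {(i, j): np.ones(2) for (a, b) in edges for (i, j) in [(a, b), (b, a)]}
neighbors = {i: [j for (a, b) in edges for j in (a, b) if i in (a, b) and j != i]
             for i in nodes}

for _ in range(20):                            # a few sweeps suffice on a tree
    for (i, j) in list(msgs):
        # Product of messages arriving at i from everyone except j.
        incoming = (np.prod([msgs[(k, i)] for k in neighbors[i] if k != j], axis=0)
                    if len(neighbors[i]) > 1 else np.ones(2))
        m = psi.T @ (phi[i] * incoming)        # marginalize over x_i
        msgs[(i, j)] = m / m.sum()             # normalize for stability

for i in nodes:
    b = phi[i] * np.prod([msgs[(k, i)] for k in neighbors[i]], axis=0)
    print(f"node {i}: marginal ~ {b / b.sum()}")
```

On a tree the fixed point is unique and exact; on loopy graphs the same updates become the approximate scheme whose fixed-point stability the thesis analyzes.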
315

Subgingival periodontopathogenic bacteria and their relation to clinical findings with the use of Gengigel® in scaling and root planing

Renatus, Antonio 04 June 2012 (has links)
The aim of this study was to demonstrate the effects of subgingivally applied hyaluronic acid (Gengigel®) after scaling and root planing (SRP) on microbiological variables in periodontitis patients, and to examine possible relationships between the bacteriological results and previously determined clinical variables. Twenty men and 29 women took part and were randomized into a test group of 23 and a control group of 26 subjects. SRP was performed with hand and ultrasonic instruments in two sessions 24 hours apart. At the end of SRP, Gengigel prof® (0.8% hyaluronic acid) was applied to the periodontal pockets of the test group; in addition, test-group subjects applied Gengigel® (0.2%) to the gingival margin twice daily, morning and evening, for the following 14 days. The control group received conventional SRP without Gengigel®. All subjects underwent periodontal examination at baseline and after three and six months. Samples of sulcular fluid were also collected for later analysis of ten periodontopathogenic species as well as of peroxidase and granulocyte activity. In contrast to the continuous increase in the control group (p = 0.035), the total bacterial count in the test group showed no change over time (p = 0.737). After six months, the test group showed a reduction in the bacterial load of Campylobacter rectus, Treponema denticola and Aggregatibacter actinomycetemcomitans (p = 0.05; p = 0.043; p = 0.066). At the end of the study, no differences in bacterial colonization between pockets of different depths were detectable in the test group (p = 1). In the test group, granulocyte activity correlated strongly with time (r = 0.443; p < 0.0001) and with the total bacterial count (r = 0.336; p = 0.009). The results point to a growth-inhibiting effect of hyaluronic acid on periodontopathogenic bacteria, possibly based on an indirect interaction with the immune system.
316

Adjuvant systemic azithromycin compared with amoxicillin/metronidazole in scaling and root planing in a private dental practice: a prospective randomized clinical study

Buchmann, Andreas 01 March 2017 (has links)
OBJECTIVES: The objective of the present study is to compare, in a clinical study, the effect of systemic adjunctive azithromycin with that of amoxicillin/metronidazole as an adjunct to scaling and root planing (SRP). MATERIALS AND METHODS: Data from 60 individuals with chronic periodontitis were evaluated after full-mouth SRP. Antibiotics were given from the first day of SRP: in the test group (n = 29), azithromycin for 3 days, and in the control group (n = 31), amoxicillin/metronidazole for 7 days. Probing depth (PD), attachment level (AL), and bleeding on probing (BOP) were recorded at baseline and after 3 and 12 months. Gingival crevicular fluid was analyzed for matrix metalloprotease (MMP)-8 and interleukin (IL)-1beta levels. Subgingival plaque was taken for assessment of the major bacteria associated with periodontitis. RESULTS: In both groups, PD, AL, and BOP were significantly reduced (p < 0.001). A few significant differences between the groups were found: AL and BOP were significantly better in the test group than in the control group at the end of the study (p = 0.020 and 0.009). Periodontopathogens were reduced most in the test group. CONCLUSIONS: Noninferiority of treatment with azithromycin in comparison with amoxicillin/metronidazole can be stated. The administration of azithromycin could be an alternative to amoxicillin/metronidazole adjunctive to SRP in patients with moderate or severe chronic periodontitis; however, a randomized placebo-controlled multicenter study is needed. CLINICAL RELEVANCE: Application of azithromycin as a single antibiotic for 3 days might be considered as an adjunctive antibiotic to SRP in selected patients.
317

Barriers in Digital Startup Scaling : A case study of Northern Ethiopia

Kakuze, Hyacinthe, Taddele Wedajo, Biniam January 2020 (has links)
The advancement of digital technology has created a pathway for digital start-ups to flourish rapidly. However, these companies face persistent challenges and barriers while scaling. Scaling is an important stage at which ventures grow their revenue at an exponential rate while keeping operating costs low. Although several research papers address the challenges and obstacles that hinder the scaling of digital startups, only a limited number of scientific studies have been conducted in the context of developing countries. This study therefore investigates the key contributing factors in northern Ethiopia (Tigrai). Qualitative exploratory research was considered a suitable method to generate contextual understanding. The outcome of the study shows that the most prominent themes impeding the scaling of digital startups are market challenges, lack of financing, lack of support from incubators, poor digital infrastructure, digital culture, and regulatory issues. Based on the findings, the study offers key applicable recommendations for overcoming those challenges.
318

Investigating How Equating Guidelines for Screening and Selecting Common Items Apply When Creating Vertically Scaled Elementary Mathematics Tests

Hardy, Maria Assunta 09 December 2011 (has links) (PDF)
Guidelines for screening and selecting common items for vertical scaling have been adopted from equating. Differences between vertical scaling and equating suggest that these guidelines may not apply to vertical scaling in the same way that they apply to equating. For example, in equating the examinee groups are assumed to be randomly equivalent, whereas in vertical scaling the examinee groups are assumed to possess different levels of proficiency. Equating studies that examined the characteristics of the common-item set stress the importance of careful item selection, particularly when groups differ in ability level. Since cross-level ability differences are expected in vertical scaling, the psychometric characteristics of the common items become even more important for a correct interpretation of students' academic growth. This dissertation applied two screening criteria and two selection approaches to investigate how changes in the composition of the linking sets affected the nature of students' growth when creating vertical scales for two elementary mathematics tests. The purpose was to observe how well these equating guidelines apply in the context of vertical scaling. Two separate datasets were analyzed to observe the impact of manipulating the common items' content area and targeted curricular grade level. The same Rasch scaling method was applied for all variations of the linking set. Both the robust z procedure and a variant of the 0.3-logit difference procedure were used to screen unstable common items from the linking sets. (In vertical scaling, a directional item-difficulty difference must be computed for the 0.3-logit difference procedure.) Different combinations of stable common items were selected to make up the linking sets. The mean/mean method was used to compute the equating constant and linearly transform the students' test scores onto the base scale. A total of 36 vertical scales were created. The results indicated that, although the robust z procedure was a more conservative approach to flagging unstable items, the robust z and the 0.3-logit difference procedures produced similar interpretations of students' growth. The results also suggested that the choice of grade-level-targeted common items affected the estimates of students' grade-to-grade growth, whereas the results regarding the choice of content-area-specific common items were inconsistent: the findings from the Geometry and Measurement dataset indicated that this choice had an impact on the interpretation of students' growth, while the findings from the Algebra and Data Analysis/Probability dataset indicated that it did not appear to significantly affect students' growth. A discussion of the limitations of the study and possible future research is presented.
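The two screening criteria and the mean/mean transformation can be sketched in Python. This sketch uses a common textbook form of the robust z statistic (median and 0.74 × IQR) and treats the 0.3-logit criterion relative to the median difficulty shift, one plausible directional adaptation for vertical scaling; the difficulties, cutoffs, and the 1.645 flagging threshold are all illustrative rather than the dissertation's actual values.

```python
import statistics as st

# Common-item difficulties (logits) calibrated separately on two grades.
b_base = [-1.2, -0.4, 0.1, 0.8, 1.5, 0.3]   # base grade
b_new  = [-0.9, -0.5, 0.6, 0.9, 1.6, 0.2]   # adjacent grade, same items

d = [bn - bb for bn, bb in zip(b_new, b_base)]   # directional differences

# Robust z: flag items whose shift is outlying relative to the median shift.
med = st.median(d)
q = st.quantiles(d, n=4)
iqr = q[2] - q[0]
robust_z = [(di - med) / (0.74 * iqr) for di in d]
stable_rz = [i for i, z in enumerate(robust_z) if abs(z) <= 1.645]

# 0.3-logit criterion, adapted: in equating the difference is compared to
# zero; here growth shifts every item, so we compare to the median shift.
stable_03 = [i for i, di in enumerate(d) if abs(di - med) <= 0.3]

# Mean/mean: equating constant from the stable common items only, used to
# place the new grade's scores on the base scale.
keep = stable_rz
A = st.mean([b_base[i] for i in keep]) - st.mean([b_new[i] for i in keep])
print(f"stable (robust z): {stable_rz}; stable (0.3 logit): {stable_03}; shift A = {A:.3f}")
```

With these toy numbers the robust z procedure flags nothing while the 0.3-logit variant flags one item, mirroring the finding that the two criteria can differ in strictness yet feed the same linear transformation.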
319

Comparison of Recommendation Systems for Auto-scaling in the Cloud Environment

Boyapati, Sai Nikhil January 2023 (has links)
Background: Cloud computing's rapid growth has highlighted the need for efficient resource allocation. While cloud platforms offer scalability and cost-effectiveness for a variety of applications, managing resources to match dynamic workloads remains a challenge. Auto-scaling, the dynamic allocation of resources in response to real-time demand and performance metrics, has emerged as a solution. Traditional rule-based methods struggle with the increasing complexity of cloud applications, whereas machine learning models offer promising accuracy by learning from performance metrics and adapting resource allocations accordingly. Objectives: This thesis addresses auto-scaling recommendations for cloud environments, emphasizing the integration of machine learning models and significant application metrics. Its primary objectives are to determine the metrics critical for accurate recommendations and to evaluate the best recommendation techniques for auto-scaling. Methods: Through experimentation and analysis, the study first identifies the crucial metrics, such as CPU usage and memory consumption, that have a substantial impact on auto-scaling decisions. Machine learning (ML) techniques are selected based on a literature review and then evaluated experimentally. These findings establish a foundation for the subsequent evaluation of ML techniques for auto-scaling recommendations. Results: The performance of Random Forests (RF), K-Nearest Neighbors (KNN), and Support Vector Machines (SVM) is investigated. The results show that RF achieves higher accuracy, precision, and recall, consistent with the significance of the metrics identified earlier. Conclusions: This thesis enhances the understanding of auto-scaling recommendations by combining the findings on metric importance and recommendation-technique performance. The findings reveal the complex interactions between metrics and recommendation methods, paving the way for adaptive auto-scaling systems that improve resource efficiency and application functionality.
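A minimal sketch of the kind of RF-based recommender the thesis evaluates, trained on synthetic CPU and memory metrics; the features, labeling rule, and thresholds are invented for illustration, whereas the thesis derives them from measured application metrics.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.uniform(0, 100, n),     # CPU usage (%)
    rng.uniform(0, 100, n),     # memory consumption (%)
])

# Toy labeling rule: scale up under pressure, down when idle, else hold.
y = np.where((X[:, 0] > 75) | (X[:, 1] > 80), "scale_up",
    np.where((X[:, 0] < 20) & (X[:, 1] < 30), "scale_down", "hold"))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))

# Feature importances show which metrics drive the recommendation,
# mirroring the thesis's metric-importance analysis.
print(dict(zip(["cpu", "memory"], clf.feature_importances_.round(3))))
```

The same pipeline, with the classifier swapped for KNN or SVM, is how the three techniques can be compared on accuracy, precision, and recall.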
320

The influence of prophylaxis procedures on the tooth-veneer interface of resin-based composite and polymer-infiltrated ceramic network restorations: an in vitro study

Unterschütz, Lena 07 June 2024 (has links)
The aim of this study was to investigate the influence of dental prophylaxis cleaning procedures and artificial aging on veneers on human teeth. The external marginal and internal tooth-veneer interfaces as well as the restoration surfaces were examined. Thirty-two extracted premolars were restored with resin-based composite (RBC) and polymer-infiltrated ceramic network (PICN) veneers. Artificial aging by alternating thermocycling and subsequent prophylaxis procedures (glycine-based powder air polishing or ultrasonic scaling) was conducted for five consecutive cycles. The external marginal interface was examined by height profile measurements, and the internal interface was investigated using micro X-ray computed tomography. In addition, the surface texture of the veneers was analyzed using confocal laser scanning microscopy. The application of both prophylaxis procedures resulted in a deepening of the marginal interface (10 μm ± 8 μm) for both materials. Furthermore, the internal interface of PICN restorations showed marginal gaps after both treatments and artificial aging (16 μm ± 3 μm). In contrast to the RBC specimens, a significant increase in surface roughness was identified for PICN veneers after ultrasonic scaling. The marginal and internal interface regions of veneers fabricated from PICN and RBC were thus affected by prophylaxis procedures, which may also increase veneer surface roughness, especially for PICN after ultrasonic scaling, potentially compromising bioadhesion and longevity. After dental prophylaxis, examination of the marginal and internal interfaces as well as the veneer surface provides precise insight into damage mechanisms and allows an assessment of longevity.
