  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
251

Změny obličejové části lebky na území střední Evropy v průběhu posledních 1200 let / Changes of the facial skeleton in Central Europe during the last 1200 years

Bejdová, Šárka January 2015 (has links)
The objective of the dissertation was to describe, quantify and interpret the degree to which the shape and size of the facial skeleton of people living in the territory of today's Czech Republic changed in the period from the Early Middle Ages to the present day, i.e. over approximately the last 1200 years. Within this time period, morphological differences between populations and changes in the sexual dimorphism, modularity and allometry of the facial skeleton were examined. The evaluation was based on CT images of skulls from three historical populations, specifically from the Early Middle Ages, the High Middle Ages and the early modern period. The current population was represented by CT images of living people. We studied the facial skeletons of a total of 329 individuals, of which 183 were men and 146 women. The CT images were used as a basis for the creation of virtual 3D surface models. The facial skeleton was divided into three morphological units, which were examined separately: the skeleton of the upper face, the lower jaw and the palate. The statistical processing was carried out using methods of geometric morphometrics, which allow the shape and size variability of the examined units to be studied separately. When comparing the size and shape differences between the studied populations it is...
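The separation of size from shape that geometric morphometrics performs can be illustrated with a minimal sketch of Procrustes superimposition, the standard first step of such analyses. The landmark coordinates below are toy values, not data from the study, and the alignment omits the reflection guard of a full Kabsch implementation:

```python
import numpy as np

def centroid_size(landmarks):
    """Centroid size: sqrt of summed squared distances of landmarks to their centroid."""
    centered = landmarks - landmarks.mean(axis=0)
    return np.sqrt((centered ** 2).sum())

def align(shape, reference):
    """Superimpose a landmark configuration onto a reference: remove position
    (centering), size (unit centroid size) and orientation (optimal rotation
    via SVD; a full implementation would also guard against reflections)."""
    a = shape - shape.mean(axis=0)
    b = reference - reference.mean(axis=0)
    a /= np.sqrt((a ** 2).sum())
    b /= np.sqrt((b ** 2).sum())
    u, _, vt = np.linalg.svd(a.T @ b)
    return a @ (u @ vt)

# two toy 3D landmark configurations (n_landmarks x 3)
ref = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
other = 2.5 * ref + 1.0   # same shape, different size and position

print(centroid_size(ref), centroid_size(other))   # sizes differ (1.5 vs 3.75)
print(np.allclose(align(other, ref), align(ref, ref)))   # shapes coincide after superimposition
```

After superimposition, only shape variation remains, so shape and size can be analyzed as separate variables, which is exactly what allows the study to test them independently across populations.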
252

Open Innovation by Opening Embedded Systems

Söldner, Constantin, Danzinger, Frank, Roth, Angela, Möslein, Kathrin January 2012 (has links)
1 INTRODUCTION With the increasing capabilities of today's smart phones, consumer demand for new applications has risen dramatically. By opening up these smart phones and giving third parties the opportunity to develop "apps" for their systems, producers like Apple and platform owners like Google can offer much more value to their customers. As smart phones are one kind of embedded system (ES), the question arises whether a similar development can also take place in other kinds of embedded systems. ES, consisting of hardware and software, are embedded in a device to realize a specific function, in contrast to personal computers, which serve multiple purposes [4,30]. The notion of incorporating external actors in the innovation process has been termed open innovation, which has become increasingly popular in research and practice since Chesbrough introduced the term in 2003 [12]. By opening up their innovation processes to external actors, firms can benefit from internal as well as external ideas. In this paper, the notion of open innovation is explored in the context of ES. The case of ES is particularly interesting, as it requires not only the opening of innovation processes, but also the opening of the embedded system itself. Some of these platforms, like Apple's iPhone, are opened only to a small degree, in order to enable others to create new applications for them. Similar developments also take place, for example, in the automotive software domain, especially concerning infotainment systems. However, most kinds of ES have so far been left untouched by this development. As more than 98% of all chips manufactured are used for ES [10] and high-performing computer chips are getting cheaper [38], opening considerations could also prove valuable for a large number of other application domains. However, opening up innovation processes in the context of ES is challenging from both an organizational and a technical perspective.
First, embedded systems are subject to a variety of constraints, in contrast to multi-purpose computing devices, such as real-time and security constraints or cost and resource constraints. Second, ES are quite diverse both in their composition and in terms of their requirements. In this paper, we explore how the different properties of embedded systems influence possible open innovation processes. This is done by drawing on the characteristics of firms implementing the three core open innovation processes suggested by Gassmann and Enkel (2004) [15] and conceptually explaining how the characteristics of ES enable or hinder open innovation processes. As a result, a classification of the OI processes in terms of ES characteristics is provided.
253

Community Detection of Anomaly in Large-Scale Network Dissertation - Adefolarin Bolaji .pdf

Adefolarin Alaba Bolaji (10723926) 29 April 2021 (has links)
The detection of anomalies in real-world networks is applicable in different domains, including, but not limited to, credit card fraud detection, malware identification and classification, cancer detection from diagnostic reports, abnormal traffic detection, and identification of fake media posts. Much ongoing research provides tools for analyzing labeled and unlabeled data; however, the challenge of finding anomalies and patterns in large-scale datasets persists because of rapid changes in the threat landscape.

In this study, I implemented a novel and robust solution that combines data science and cybersecurity to solve complex network security problems. I used a Long Short-Term Memory (LSTM) model, the Louvain algorithm, and the PageRank algorithm to identify and group anomalies in a large-scale real-world network containing billions of packets. The developed model used different visualization techniques to provide further insight into how the anomalies in the network are related.

Mean absolute error (MAE) and root mean square error (RMSE) were used to validate the anomaly detection models; the results obtained were 5.1813e-04 and 1e-03 respectively. The low loss from the training phase confirmed the low RMSE: loss 5.1812e-04, mean absolute error 5.1813e-04, validation loss 3.9858e-04, validation mean absolute error 3.9858e-04. The community detection yielded an overall modularity value of 0.914, which indicates very strong communities among the anomalies. The largest sub-community of the anomalies connects 10.42% of their total nodes.

The broader aim and impact of this study was to provide sophisticated, AI-assisted countermeasures to cyber-threats in large-scale networks. To close the gaps created by the shortage of skilled and experienced cybersecurity specialists and analysts in the cybersecurity field, solutions based on out-of-the-box thinking are inevitable; this research aimed to yield one such solution. It was built to detect specific and collaborating threat actors in large networks and to help curtail the activities of anomalies in any given large-scale network more quickly.
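The community-detection step described above can be sketched with the Louvain algorithm and the modularity measure as implemented in NetworkX. The toy graph and node names below are hypothetical stand-ins for the anomaly graph, not data from the study:

```python
import networkx as nx
from networkx.algorithms.community import louvain_communities, modularity

# toy graph standing in for the anomaly graph (node names are hypothetical)
G = nx.Graph()
G.add_edges_from([
    ("a", "b"), ("a", "c"), ("b", "c"),   # one dense group of anomalies
    ("d", "e"), ("d", "f"), ("e", "f"),   # another dense group
    ("c", "d"),                           # weak bridge between the groups
])

# Louvain greedily merges nodes to maximize modularity
communities = louvain_communities(G, seed=42)
q = modularity(G, communities)
print(sorted(map(sorted, communities)))   # [['a', 'b', 'c'], ['d', 'e', 'f']]
print(round(q, 3))                        # 0.357

# PageRank scores nodes by connectivity; bridge nodes like "c" rank high
ranks = nx.pagerank(G)
```

A modularity near 1 (such as the 0.914 reported above) means edges fall overwhelmingly inside communities rather than between them, which is why it is read as evidence of very strong grouping among the anomalies.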
254

Sparsity-sensitive diagonal co-clustering algorithms for the effective handling of text data

Ailem, Melissa 18 November 2016 (has links)
In the current context, there is a clear need for text mining techniques to analyse the huge quantity of unstructured text documents available on the Internet. These textual data are often represented by sparse, high-dimensional matrices whose rows and columns represent documents and terms respectively. Thus, it is worthwhile to simultaneously group these terms and documents into meaningful clusters, making this substantial amount of data easier to handle and interpret. Co-clustering techniques serve exactly this purpose. Although many existing co-clustering approaches have been successful in revealing homogeneous blocks in several domains, these techniques are still challenged by the high dimensionality and sparsity of document-term matrices. Due to this sparsity, several co-clusters are composed primarily of zeros; while homogeneous, such co-clusters are irrelevant and must be filtered out in a post-processing step to keep only the most significant ones. The objective of this thesis is to propose new co-clustering algorithms tailored to these sparsity-related issues. The proposed algorithms seek a block-diagonal structure and identify the most useful co-clusters directly, which makes them especially effective for the text co-clustering task. Our contributions can be summarized as follows. First, we introduce and demonstrate the effectiveness of a novel co-clustering algorithm based on a direct maximization of graph modularity. While existing graph-based co-clustering algorithms rely on spectral relaxation, the proposed algorithm uses an iterative alternating optimization procedure to reveal the most meaningful co-clusters in a document-term matrix. Moreover, the proposed optimization avoids the computation of eigenvectors, a task which is prohibitive for high-dimensional data; this is an improvement over spectral approaches, where the eigenvector computation is necessary to perform the co-clustering. Second, we use a probabilistic approach to discover block-diagonal structures in document-term matrices. We rely on mixture models, which offer strong theoretical foundations and considerable flexibility for uncovering various cluster structures. More precisely, we propose a rigorous probabilistic model based on the Poisson distribution and the well-known latent block model. Interestingly, this model includes the sparsity in its formulation, which makes it particularly effective for text data. Casting the estimation of this model's parameters in the Maximum Likelihood (ML) and Classification Maximum Likelihood (CML) frameworks, four co-clustering algorithms are proposed: a hard, a soft and a stochastic variant, plus a fourth algorithm which leverages the benefits of both the soft and stochastic variants simultaneously. As a last contribution of this thesis, we propose a new biomedical text mining framework that includes some of the above-mentioned co-clustering algorithms. This work shows the contribution of co-clustering in a real biomedical text mining problem. The proposed framework generates new clues about the results of genome-wide association studies (GWAS) by mining PUBMED abstracts. It has been tested on asthma and made it possible to assess the strength of associations between asthma genes reported in previous GWAS, as well as to discover new candidate genes likely associated with asthma. In a nutshell, while several text co-clustering algorithms already exist, their performance can be substantially increased if more appropriate models and algorithms are available. According to the extensive experiments done on several challenging real-world text data sets, we believe that this thesis has served this objective well.
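The thesis's own algorithms are not reproduced here, but the spectral-relaxation baseline it improves upon can be sketched with scikit-learn's SpectralCoclustering on a toy document-term matrix with a planted block-diagonal structure (all sizes and probabilities below are illustrative):

```python
import numpy as np
from sklearn.cluster import SpectralCoclustering

# toy sparse document-term matrix with a planted block-diagonal structure
rng = np.random.default_rng(0)
X = rng.binomial(1, 0.02, size=(40, 60)).astype(float)   # sparse background noise
X[:20, :30] += rng.binomial(1, 0.4, size=(20, 30))       # co-cluster 1: docs 0-19, terms 0-29
X[20:, 30:] += rng.binomial(1, 0.4, size=(20, 30))       # co-cluster 2: docs 20-39, terms 30-59

model = SpectralCoclustering(n_clusters=2, random_state=0)
model.fit(X)

# documents and terms sharing a label form one diagonal block
label_first = np.bincount(model.row_labels_[:20]).argmax()
label_second = np.bincount(model.row_labels_[20:]).argmax()
print(label_first != label_second)   # the two planted blocks get distinct labels
```

Note that this spectral method requires an eigenvector (singular-vector) computation on the scaled matrix; avoiding exactly that step on high-dimensional data is the advantage the modularity-maximization algorithm above claims.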
255

Biom / Biom

Bolcek, Roman Unknown Date (has links)
Over the last century our planet has faced enormous challenges caused by population growth and an ever-evolving industry, resulting in ever-increasing CO2 production, rising water levels, misuse of agricultural land and the extinction of animal species, all of which destroy biomes. Architectural and urban tendencies in the construction of cities, which remain unchanged even today, contribute to this considerably: they consume the maximum area, use materials that cannot be recycled, and make insufficient use of renewable resources, modern agriculture and self-sufficiency, in both housing and urban structures. The reason these technologies are not used is largely a political and commercial problem. The aim of this work is to examine the problems we face today, find meaningful solutions, and change existing architectural and urban trends: to create a self-sufficient structure in places where biomes were destroyed and thereby create new ones. These places often have poor living conditions, such as high temperatures, a lack of drinking water and overcrowding. With the help of simple rules for working with the landscape and the use of modern technology, the project creates a new biosphere environment, changes the climatic conditions in a given place and establishes suitable conditions for the life of both plant and animal communities. The structure should be inhabited by a certain number of people living in fully self-sufficient modules, following the ISS model, which provide plenty of drinking water, food and energy. The structure should be built from plastic waste using new construction technologies, such as 3D printing with nanotechnology and carbon fiber, making it fully recyclable and renewable. The goal is to work with one structure and subsequently create another.
256

Inherited Ontologies and the Relations between Philosophy of Mind and the Empirical Cognitive Sciences

Rickels, Christopher A. 22 August 2013 (has links)
No description available.
257

Development of a Highly Flexible Geometry Station for Versatile Production Systems in Automotive Body Construction: A Station designed for Joining of Body-in-White Assemblies during an Integration of Electric Vehicles / Framtagning av en högflexibel geometristation för mångsidiga produktionssystem inom fordonsindustrin

Knutsson, Erik January 2018 (has links)
The research in this report seeks to develop a highly flexible geometry station for joining future Body-in-White (BiW) assemblies. The goal is to eliminate the need for a complete reconstruction of a production line during the integration of new car bodies into a contemporary production. Today's BiW production is performed in sequential lines, where joining equipment is arranged in a specific order for each model geometry. An increasing model portfolio forces manufacturers to develop production systems that allow an integration of new models. Electrified alternatives to existing models are now developed and put into production. These models have a similar appearance to conventional models, but a completely different driveline principle due to the propulsion. This means that new interfaces and platforms have to be developed and must now be integrated into a current production. Today's production lines are not prepared for coming changes, and current stations can only handle a limited number of variants. Integration of new geometries into a contemporary production line is not sufficiently efficient from a production perspective. The goal for the future is to make such an integration possible. Initially, current and future production scenarios were studied. Based on this, four types of production scenarios into which a highly flexible geometry station can be integrated were set up. An integration can take place in different ways depending on how a highly flexible geometry station is composed; therefore, different cases were created and compared in a case study. Internal and external benchmarking of production systems was done to compare the available solutions for increasing station flexibility in a BiW production. As a reference for the project, a concept for a highly flexible geometry station was developed and is therefore described initially, before an additional depth was realized. The further conceptualization of a highly flexible geometry station in this report is presented in the form of a morphological composition of technologies that can increase a station's flexibility, as well as a visualization of station principles through layouts and cycle time charts. The analysis generated several concepts that offer different degrees of capacity, footprint and flexibility. The focus was to achieve a high level of flexibility for the integration of new models, with new geometries, into a current production. The conclusion was that the highly flexible geometry station can produce independently in low volumes in a contemporary production, or produce higher volumes when integrated as a complement in a novel, not yet implemented, production concept.
258

Modularization and evaluation of vehicle’s electrical system

Abdo, Nawar January 2019 (has links)
Modularization is a strategy used by many companies to help them provide their customers with a high variety of customized products efficiently. This is done through the customization of different independent modules, which are connected by standardized interfaces that are shared throughout the entire module variety. Scania, being one of the large companies that provide modular products, has been successfully improving its modularization concepts for many years, and is one of the most iconic companies when it comes to the modularization of buses, trucks and engines. But with the increasing amount of electronics integrated into the vehicles, it is becoming more and more important to modularize the electrical system. There is currently an existing, modularized product architecture for the electrical system, and Scania wants to know how well modularized it is, as there is no unified measure that indicates what is considered to be the better solution. To analyze the current state of the electrical system, a systematic method of modularization was used, which would help answer three important questions: Are the modules well defined? Is there a way to systematically compare alternative solutions? Which criteria are more important to focus on? Since there is no unified way of modularization, many modularization methods have been created, and each one has been optimized for a certain purpose. This project compares three different modularization methods and then uses the one deemed preferable to help provide the answers that the company seeks when investigating the modularity of the electrical system. As the electrical system is very complex, and the project had a limited amount of resources, it was decided to choose one of the control units as an example: the APS (air processing system). The literature study showed that the most rewarding method to use was MFD (Module Function Deployment), as it provides more information about the product and about which criteria the company should focus on. The relevant steps of MFD were then used to analyze the state of the APS as an example of how the method works.
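A central step of MFD is the Module Indication Matrix, in which candidate sub-functions are scored against weighted module drivers to find the strongest module candidates. A minimal sketch of that scoring, in which the drivers, sub-functions and all numbers are hypothetical examples rather than Scania data:

```python
# hypothetical module drivers and their weights on the 9/3/1 scale
# commonly used in MFD-style matrices (all values illustrative)
drivers = ["carryover", "technology push", "common unit", "service/maintenance"]
weights = [9, 3, 9, 3]

# rows: candidate sub-functions of the electrical system (hypothetical),
# columns: how strongly each module driver applies to that sub-function
mim = {
    "pressure sensing":  [9, 1, 9, 3],
    "valve actuation":   [3, 3, 9, 9],
    "diagnostics logic": [1, 9, 3, 9],
}

def module_score(scores, weights):
    """Weighted sum of driver scores: a higher total marks a stronger module candidate."""
    return sum(s * w for s, w in zip(scores, weights))

ranked = sorted(mim, key=lambda name: module_score(mim[name], weights), reverse=True)
for name in ranked:
    print(name, module_score(mim[name], weights))
```

Ranking sub-functions this way makes the comparison of alternative modularizations systematic, which is exactly the second of the three questions the project set out to answer.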
259

Form-Factor-Constrained, High Power Density, Extreme Efficiency and Modular Power Converters

Wang, Qiong 18 December 2018 (has links)
Enhancing performance of power electronics converters has always been an interesting topic in the power electronics community. Over the years, researchers and engineers are developing new high performance component, novel converter topologies, smart control methods and optimal design procedures to improve the efficiency, power density, reliability and reducing the cost. Besides pursuing high performance, researchers and engineers are striving to modularize the power electronics converters, which provides redundancy, flexibility and standardization to the end users. The trend of modularization has been seen in photovoltaic inverters, telecommunication power supplies, and recently, HVDC applications. A systematic optimal design approach for modular power converters is developed in this dissertation. The converters are developed for aerospace applications where there are stringent requirement on converter form factor, loss dissipation, thermal management and electromagnetic interference (EMI) performance. This work proposed an optimal design approach to maximize the nominal power of the power converters considering all the constraints, which fully reveals the power processing potential. Specifically, this work studied three-phase active front-end converter, three-phase isolated ac/dc converter and inverter. The key models (with special attention paid to semiconductor switching loss model), detailed design procedures and key design considerations are elaborated. With the proposed design framework, influence of key design variables, e.g. converter topology, switching frequency, etc. is thoroughly studied. Besides optimal design procedure, control issues in paralleling modular converters are discussed. A master-slave control architecture is used. The slave controllers not only follow the command broadcasted by the master controller, but also synchronize the high frequency clock to the master controller. 
The control architecture eliminates communication between the slave controllers while keeping paralleled modules well synchronized, enabling a fully modularized design. Furthermore, the implementation issues of modularity are discussed. Although modularizing converters under form-factor constraints adds flexibility to the system, it limits the design space by forbidding oversized components. This work studies the influence of the form factor by exploring the maximal nominal power of a double-sized converter module and comparing it with that of two paralleled modules. The tradeoff between modularity and performance is revealed by this study. Another implementation issue is related to EMI. Scaling up system capacity by paralleling converter modules induces EMI issues at both the signal level and the system level. This work investigates the mechanisms and provides solutions to the EMI problems. / Ph. D. / As the penetration of power electronics technologies in electric power delivery keeps increasing, the performance of power electronics converters becomes a key factor in energy delivery efficacy and sustainability. Enhancing converter performance reduces footprint, energy waste and delivery cost and ultimately promotes sustainable energy use. Over the years, researchers and engineers have been developing new technologies, including high-performance components, novel converter topologies, smart control methods and optimal design procedures, to improve the efficiency, power density and reliability of power electronics converters and to reduce their cost. Besides pursuing high performance, researchers and engineers are striving to modularize power electronics converters, enabling them to be used in a "plug-and-play" fashion. Modularization provides redundancy, flexibility and standardization to the end users. The trend of modularization has been seen in applications that process electric power from several watts to megawatts. 
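The master-slave clock synchronization described in the abstract can be pictured with a toy simulation. This is a hypothetical sketch, not the dissertation's implementation: each slave measures the phase error of its local clock edge against the master's broadcast edge and applies a proportional correction (the 0.5 gain is an assumed value), so paralleled modules converge without any slave-to-slave links.

```python
# Hypothetical sketch of master-slave clock synchronization: the master
# broadcasts a reference edge; each slave corrects its local clock offset
# from the measured phase error. No slave talks to another slave.

class Slave:
    def __init__(self, initial_offset_ns):
        # Initial misalignment of this slave's clock vs. the master (ns).
        self.offset_ns = initial_offset_ns

    def on_broadcast(self, master_edge_ns, local_edge_ns):
        # Phase error between the slave's own edge and the master's edge.
        error = local_edge_ns - master_edge_ns
        # Proportional correction; gain < 1 for stable convergence (assumed).
        self.offset_ns -= 0.5 * error

def sync_round(slaves, master_edge_ns=0):
    for s in slaves:
        s.on_broadcast(master_edge_ns, master_edge_ns + s.offset_ns)

slaves = [Slave(120), Slave(-80), Slave(35)]
for _ in range(10):          # ten broadcast periods
    sync_round(slaves)
print([round(s.offset_ns, 3) for s in slaves])  # offsets decay toward 0
```

Each round halves the residual offset, so after ten broadcasts the initial misalignments of ±100 ns have shrunk below a nanosecond; a real controller would add integral action and compensate propagation delay, which this sketch omits.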
This dissertation discusses a design framework that incorporates modularization into the existing converter design procedure, synergistically achieving performance optimization and modularity. A systematic optimal design approach for modular power converters is developed. The converters are developed for aerospace applications, where there are stringent requirements on converter dimensions, loss dissipation and thermal management. In addition, to ensure stable operation of the onboard power system, filters comprising inductors and capacitors are necessary to reduce electromagnetic interference (EMI). Owing to the considerable weight and size of the inductors and capacitors, filter design is one of the key components of converter design. This work proposes an optimal design approach that synergistically optimizes performance and promotes modularity while complying with all aerospace requirements. Specifically, this work studies a three-phase active front-end converter, a three-phase isolated ac/dc converter and a three-phase inverter. The key models, detailed design procedures and key design considerations are elaborated. Experimental results validate the design framework and key models and demonstrate cutting-edge converter performance. To enable a fully modularized design, the control of modular converters, with a focus on synchronizing them, is discussed. This work proposes a communication structure that minimizes communication resources and achieves seamless synchronization among multiple modular converters operating in parallel. The communication scheme is demonstrated by experiments. In addition, the implementation issues of modularity are discussed. Although modularizing converters under form-factor constraints adds flexibility to the system, it limits the design space by forbidding oversized components. 
This work studies the impact of modularity by comparing the performance of a double-sized converter module with that of two paralleled modules. The tradeoff between modularity and performance is revealed by this study.
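The modularity-versus-performance tradeoff compared above can be illustrated with a deliberately simple numeric model. This sketch is not from the dissertation: the power-density figure, the fixed per-module overhead and the linear scaling are all hypothetical placeholders, chosen only to show why a double-sized module can out-rate two paralleled standard modules under the same total volume budget.

```python
# Illustrative toy model (all numbers hypothetical): maximum nominal power
# of one double-sized converter module vs. two paralleled standard modules
# under a shared form-factor (volume) constraint.

def max_power(volume_budget_l, p_density_kw_per_l=3.0, overhead_l=0.4):
    """Usable power grows with the volume left after fixed per-module
    overhead (housing, connectors, controller) is subtracted.
    p_density_kw_per_l and overhead_l are assumed placeholder values."""
    active_volume = max(volume_budget_l - overhead_l, 0.0)
    return p_density_kw_per_l * active_volume  # kW

single = max_power(volume_budget_l=2.0)   # one standard 2 L module
double = max_power(volume_budget_l=4.0)   # one double-sized 4 L module
paralleled = 2 * single                   # two standard modules, 4 L total

print(f"double-sized module:    {double:.1f} kW")
print(f"two paralleled modules: {paralleled:.1f} kW")
# The double-sized module pays the fixed overhead only once, so it reaches
# a higher rating: the modularity-vs-performance tradeoff in miniature.
```

In this toy model the double-sized module yields 10.8 kW against 9.6 kW for two paralleled modules; the real comparison in the dissertation rests on detailed loss, thermal and EMI-filter models rather than a single linear density figure.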

Das Genfer Modell zur Diskursanalyse - Möglichkeiten und Grenzen seiner Anwendung in der Exegese des Neuen Testaments = The Geneva model of discourse analysis and its application to New Testament exegesis: potential and limitations

Wüsthoff, Cornelia 31 May 2007 (has links)
Summaries in German and English / The Geneva model of discourse analysis is a linguistic tool developed by Eddy Roulet and his team in Geneva. It was first presented in its modular approach in 1999. This dissertation examines whether the Geneva model can be applied to New Testament texts and whether this application yields results for exegesis. I first explain the model with its five basic modules and twelve organization forms, giving examples from German texts as well as simple New Testament examples. Then I apply the model to two New Testament texts (Rom 6:1-11 and John 8:31-42), summarizing the results in relation to exegesis at the end of each analysis. In the final chapter I discuss which parts of the Geneva model should generally be applied to New Testament exegesis, explaining its potential and its limitations and suggesting some areas in which the Geneva model could be complemented by other approaches. / Das Genfer Modell ist ein von Eddy Roulet und seinem Team in Genf entwickelter linguistischer Ansatz zur Diskursanalyse. Er wurde 1999 erstmalig in seiner modularen Ausprägung vorgestellt. In der vorliegenden Arbeit wird untersucht, ob dieses Modell auf neutestamentliche Texte angewendet werden kann und ob diese Anwendung einen Ertrag für die Exegese bringt. Ich erläutere dazu das Modell mit seinen fünf Grundmodulen und zwölf Modulverbindungen zunächst an deutschen und einfachen neutestamentlichen Beispielen. Sodann wende ich das Modell auf zwei neutestamentliche Texte an (Rom 6:1-11 und Joh 8:31-42). Am Ende jeder Anwendung fasse ich die Ergebnisse für die Exegese dieser Texte zusammen. Im Schlusskapitel schließlich erörtere ich, welche Teile des Modells in der Exegese mit Gewinn eingesetzt werden können, wo Nutzen und Grenzen seiner Anwendung liegen und in welchen Bereichen das Modell durch andere Ansätze sinnvoll ergänzt werden kann. / New Testament / M. Th. (New Testament)
