941

Towards Selective Ethylene Tetramerization

Shaikh, Yacoob 21 August 2012 (has links)
There is an increasing trend towards advancing the understanding and development of ethylene oligomerization catalysts, both in academia and industry. The metal of choice in this chemistry is invariably chromium, which has shown great versatility in selective trimerization/tetramerization, non-selective oligomerization and polymerization of ethylene. While much success has been achieved in ethylene trimerization, the same cannot be said about tetramerization catalysis. Aminophosphine-based ligands have demonstrated their ability towards selective 1-octene production; however, the popular PNP catalyst achieves only 70% selectivity. This thesis work was undertaken to explore the possibility of developing chromium-based ethylene tetramerization catalysts with enhanced selectivity. The ligand systems chosen for this work were bidentate aminophosphines (PN(CH2)nNP), which yielded interesting selective oligomerization. Subtle modifications were found to result in drastic changes in selectivity, from tetramerization (PN(CH2)3NP) to trimerization (PN(CH2)2NP). We successfully developed the first truly selective (over 90%) 1-octene catalyst with polymer-free behavior. A further modification of the ligand framework, in which a single Si atom was used to link the two NP units, resulted in non-selective oligomerization; in this case we determined that the oxidation state of chromium is a key player. We explored other modifications of our selective ligands in which one of the arms of the bidentate ligand was replaced with a base donor (amine, phosphine or pyridine), which resulted in interesting selectivity changes. The final modification tested was a novel N(CH2)2P ligand, which proved to be a highly active but non-selective oligomerization catalyst.
942

Potential alternative sources of funding South Africa’s land redistribution programme in its agricultural sector

Britain-Renecke, Cézanne January 2011 (has links)
No description available.
943

Study of semiconductor nanostructures for quantum photonics: microcavity polaritons under two-photon excitation and single-photon sources based on colloidal nanocrystals

Leménager, Godefroy 07 December 2012 (has links) (PDF)
My thesis work focused on the confinement of photons and electrons in several systems. First, I studied a new type of semiconductor nanocrystal to obtain, at room temperature, an efficient source of polarized single photons. I then developed a two-photon excitation technique for polaritons in semiconductor microcavities. These semiconductor nanocrystals have the particular feature of an elongated cadmium sulfide (CdS) semiconductor shell around a spherical cadmium selenide (CdSe) core. For more than ten years, semiconductor nanocrystals have been known as efficient single-photon emitters at room temperature, but their photoluminescence suffered from two defects: blinking, an alternation between bright and dark states, and a very weak polarization of their emission. In my study, by acting on the geometric parameters of the nanocrystals (core diameter and shell length), I obtained the emission of strongly polarized single photons (degree of linear polarization above 80%) and showed the link between the polarization and the aspect ratio of the nanocrystals. Moreover, by finely tuning the shell thickness, I demonstrated that the blinking can be drastically suppressed while keeping a very high-quality single-photon source (g(2) < 0.2). In the second part of my thesis work, I turned to strong light-matter coupling in semiconductor microcavities and micropillars. I developed and characterized a new type of excitation for polaritons based on two-photon absorption. In the case of micropillars, where the polaritons are confined in a 0D system, we demonstrated a laser effect under two-photon pumping. The relaxation of the system and the interactions between polaritons are compared under different excitation geometries.
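As an illustration of the g(2) < 0.2 figure quoted above, here is a minimal sketch (not from the thesis) of how the second-order correlation at zero delay can be estimated from a Hanbury Brown-Twiss coincidence histogram; the bin spacing, coincidence window and counts are invented toy values:

    import numpy as np

    def g2_zero(tau_bins, coincidences, coincidence_window):
        """Estimate g(2)(0) from a coincidence histogram.

        tau_bins: bin centres of the delay histogram (same unit as the window)
        coincidences: coincidence counts per delay bin
        coincidence_window: half-width of the window treated as zero delay
        """
        tau_bins = np.asarray(tau_bins, dtype=float)
        coincidences = np.asarray(coincidences, dtype=float)
        zero = np.abs(tau_bins) <= coincidence_window        # bins around tau = 0
        far = np.abs(tau_bins) > 10 * coincidence_window     # uncorrelated reference bins
        # g(2)(0) ~ coincidences at zero delay / mean coincidences at large delay
        return coincidences[zero].mean() / coincidences[far].mean()

    # Toy histogram with a dip at zero delay, as expected for a single-photon emitter
    taus = np.arange(-100, 101, 5.0)                          # delay in ns
    counts = np.where(np.abs(taus) <= 5, 15.0, 100.0)
    print(g2_zero(taus, counts, coincidence_window=5.0))      # ~0.15, i.e. g(2)(0) < 0.2

A value well below 0.5 means the emitter delivers at most one photon at a time, which is the single-photon criterion the abstract refers to.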
944

Riktlinjer för att förbättra datakvaliteten hos data warehouse system / Guiding principles to improve data quality in data warehouse system

Carlswärd, Martin January 2008 (has links)
Data warehouse systems emerged during the 1990s and have been implemented in many organizations. The source systems an organization runs can be integrated with a data warehouse system to build a single version of reality and to produce reports that serve as a basis for decisions. A single version of reality means that a common picture is created of how the organization's daily work is carried out, and this picture forms the underlying information for the analyses produced by the data warehouse system. It is therefore essential that the reports hold a data quality the organization considers satisfactory, which in turn means that the data quality of the data warehouse system must be kept at a sufficiently high level. If the data quality of the decision basis is deficient, the organization will not make optimal decisions; decisions may be taken that otherwise would not have been. Improving the data quality of the data warehouse system is therefore central to the organization. The quality philosophy Total Quality Management (TQM) supports such improvement because it allows quality to be addressed as a whole. A holistic perspective is needed because poor data quality is caused not only by factors inside the data warehouse system itself but also by other factors. The quality-improving measures that need to be carried out vary between organizations, since they are adapted to how each organization works, even though some measures are shared at a more general level. What is communicated, for example in reports from the data warehouse system, must be perceived by the organization's actors as understandable and trustworthy, because the decision basis must be understandable and trustworthy for the receiver of the information. If reports contain junk characters, for instance, it becomes difficult for the receiver to regard the information as trustworthy and understandable. If the quality of the communicated message improves, that is, if the communication quality improves, the data quality of the data warehouse system will ultimately improve as well. This thesis develops guidelines for improving the data quality of data warehouse systems with the help of communication quality and TQM; their purpose is to improve data quality by improving the quality of what is communicated within the company's data warehouse system. The concrete measures are situation-dependent. One example is to let the receiver find out who the sender of the information content in a report is, so that the receiver can question and verify the communicative act with the sender, who in turn can defend the message; this increases the trustworthiness of the communicative act. Another example is to introduce input controls in the source systems to prevent actors from entering junk characters that end up in the data warehouse system, which improves the receiver's understanding of what is communicated.
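The abstract's second example, input controls at the source systems that keep junk characters out of the data warehouse, can be sketched as a simple validation step; the allowed character set and the field values below are illustrative assumptions only, not part of the thesis:

    import re

    # Characters accepted in free-text fields before they are loaded into the
    # warehouse staging area: word characters (incl. å/ä/ö), whitespace and
    # common punctuation. Everything else is treated as junk.
    ALLOWED = re.compile(r"[\w\s.,;:()\-/&%]*", re.UNICODE)

    def validate_field(value: str) -> str:
        """Reject a source-system value that contains junk characters."""
        if not value.isprintable():
            raise ValueError(f"non-printable junk characters in {value!r}")
        if not ALLOWED.fullmatch(value):
            raise ValueError(f"unexpected characters in {value!r}")
        return value.strip()

    print(validate_field("Kundorder 2008-03"))   # passes: 'Kundorder 2008-03'
    # validate_field("Kund\x07order")            # raises ValueError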
945

Field Oriented Control Of A Permanent Magnet Synchronous Motor Using Space Vector Modulated Direct Ac-ac Matrix Converter

Yildirim, Dogan 01 May 2012 (has links) (PDF)
The study designs and constructs a three-phase to three-phase direct AC-AC matrix converter based surface-mounted permanent magnet synchronous motor (PMSM) drive system. First, the matrix converter topologies are analyzed and the state-space equations describing the system are derived in terms of the input and output variables. After that, matrix converter commutation and modulation methods are investigated. A four-step commutation technique based on output current direction provides safe commutation between the matrix converter switches. Then, the matrix converter is simulated for both open-loop and closed-loop control. For the closed-loop control, a current regulator (PI controller) controls the output currents and their phase angles. Advanced pulse width modulation and control techniques, namely space vector pulse width modulation and field oriented control, are used for the closed-loop control of the system. Next, a model of a diode-rectified two-level voltage source inverter is developed for simulations. A comparative study of the indirect space vector modulated direct matrix converter and the space vector modulated diode-rectified two-level voltage source inverter is given in terms of input/output waveforms to verify that the matrix converter fulfills the two-level voltage source inverter operation. Following this verification, the simulation model of the permanent magnet motor drive system is implemented. A direct matrix converter prototype is also constructed for experimental verification of the results. As a first step in the experimental work, filter types are investigated and a three-phase input filter is constructed to reduce the harmonic pollution. Then, the direct matrix converter power circuitry and gate-driver circuitry are designed and constructed. To control the matrix switches, the control algorithm is implemented using a DSP and an FPGA. This digital control system measures the output currents and the input voltages with the aid of sensors and controls the matrix converter switches to produce the PWM pattern required to synthesize the reference input current and output voltage vectors. Finally, the simulation results are tested and supported by laboratory experiments involving both an R-L load and a permanent magnet synchronous motor load. During the tests, the line-to-line supply voltage is set to 26 V peak value and a 400 V/3.5 kW surface-mounted permanent magnet motor is used.
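To make the closed-loop control described above concrete, here is a minimal sketch of one field-oriented control period (Clarke and Park transforms plus PI regulators on the d- and q-axis currents); the gains, sampling period, rotor angle and measured currents are illustrative assumptions, not values from the thesis, and in the real drive the resulting voltage reference is handed to the space-vector-modulated matrix converter:

    import math

    def clarke(ia, ib, ic):
        """Three-phase currents -> stationary alpha/beta frame (amplitude-invariant)."""
        i_alpha = (2.0 / 3.0) * (ia - 0.5 * ib - 0.5 * ic)
        i_beta = (2.0 / 3.0) * (math.sqrt(3) / 2.0) * (ib - ic)
        return i_alpha, i_beta

    def park(i_alpha, i_beta, theta):
        """Stationary frame -> rotor d/q frame at electrical angle theta."""
        i_d = i_alpha * math.cos(theta) + i_beta * math.sin(theta)
        i_q = -i_alpha * math.sin(theta) + i_beta * math.cos(theta)
        return i_d, i_q

    class PI:
        def __init__(self, kp, ki, dt):
            self.kp, self.ki, self.dt, self.integral = kp, ki, dt, 0.0
        def step(self, error):
            self.integral += error * self.dt
            return self.kp * error + self.ki * self.integral

    # One control period: drive i_d to 0 and i_q to the torque-producing reference
    pi_d = PI(kp=5.0, ki=100.0, dt=1e-4)
    pi_q = PI(kp=5.0, ki=100.0, dt=1e-4)
    theta = 0.3                                        # rotor electrical angle (rad)
    i_alpha, i_beta = clarke(ia=1.0, ib=-0.4, ic=-0.6) # measured phase currents (A)
    i_d, i_q = park(i_alpha, i_beta, theta)
    v_d, v_q = pi_d.step(0.0 - i_d), pi_q.step(2.0 - i_q)
    # Inverse Park: the alpha/beta voltage reference then goes to the space
    # vector modulator that produces the matrix converter switching pattern.
    v_alpha = v_d * math.cos(theta) - v_q * math.sin(theta)
    v_beta = v_d * math.sin(theta) + v_q * math.cos(theta)
    print(v_alpha, v_beta)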
946

Data-centric determination of association rules in parallel database architectures

Legler, Thomas 15 August 2009 (has links) (PDF)
This work addresses the everyday usability of modern mass data processing, in particular the problem of association rule analysis. Available data volumes are growing rapidly, but their analysis is difficult for untrained users, so companies forgo information that is in principle available. Association rules reveal dependencies between the elements of a data set, for example between sold products, and these rules can be annotated with interestingness measures that let the user recognize important relationships. Approaches are presented that make the analysis easier for the user, concerning both the robust operation of the methods and the simple interpretation of the rules; unlike other methods, the presented algorithms adapt themselves to the data being processed. Association rule mining requires the extraction of frequent itemsets, and ways are shown to adapt solution approaches to the characteristics of modern systems. As one approach, methods for computing the N most frequent itemsets are described which, unlike known approaches, are easy to configure. Modern systems also often compute in a distributed fashion: such clusters can process large data volumes in parallel but require the merging of local results, so approaches with different properties are presented for distributed top-N frequent-itemset mining on realistic partitionings. Association rules are then formed from the frequent itemsets, and their presentation should also be easy to handle. Many interestingness measures have been proposed in the literature; each corresponds to a subjective assessment, but not necessarily that of the user. It is therefore investigated how several interestingness measures can be combined into one global measure, which finds rules that appear important repeatedly and lets the user narrow down the search goal. A second approach groups rules based on the frequencies of the rule elements, which form the basis of interestingness measures; the rules of such a group are similar with respect to many measures and can be evaluated together, reducing the user's manual effort. This work thus shows how association rule mining can be extended to a broad user base and reach new users, simplifying it from a specialist application into an easily usable data analysis tool. / The importance of data mining is widely acknowledged today. Mining for association rules and frequent patterns is a central activity in data mining. Three main strategies are available for such mining: APRIORI, FP-tree-based approaches like FP-GROWTH, and algorithms based on vertical data structures and depth-first mining strategies like ECLAT and CHARM. Unfortunately, most of these algorithms are only moderately suitable for many "real-world" scenarios because their usability and the special characteristics of the data are two aspects of practical association rule mining that require further work. All mining strategies for frequent patterns use a parameter called minimum support to define a minimum occurrence frequency for searched patterns. This parameter cuts down the number of patterns searched to improve the relevance of the results.
In complex business scenarios, it can be difficult and expensive to define a suitable value for the minimum support because it depends strongly on the particular datasets. Users are often unable to set this parameter for unknown datasets, and unsuitable minimum-support values can extract millions of frequent patterns and generate enormous runtimes. For this reason, it is not feasible to permit ad-hoc data mining by unskilled users. Such users do not have the knowledge and time to define suitable parameters by trial-and-error procedures. Discussions with users of SAP software have revealed great interest in the results of association-rule mining techniques, but most of these users are unable or unwilling to set very technical parameters. Given such user constraints, several studies have addressed the problem of replacing the minimum-support parameter with more intuitive top-n strategies. We have developed an adaptive mining algorithm to give untrained SAP users a tool to analyze their data easily without the need for elaborate data preparation and parameter determination. Previously implemented approaches of distributed frequent-pattern mining were expensive and time-consuming tasks for specialists. In contrast, we propose a method to accelerate and simplify the mining process by using top-n strategies and relaxing some requirements on the results, such as completeness. Unlike such data approximation techniques as sampling, our algorithm always returns exact frequency counts. The only drawback is that the result set may fail to include some of the patterns up to a specific frequency threshold. Another aspect of real-world datasets is the fact that they are often partitioned for shared-nothing architectures, following business-specific parameters like location, fiscal year, or branch office. Users may also want to conduct mining operations spanning data from different partners, even if the local data from the respective partners cannot be integrated at a single location for data security reasons or due to their large volume. Almost every data mining solution is constrained by the need to hide complexity. As far as possible, the solution should offer a simple user interface that hides technical aspects like data distribution and data preparation. Given that BW Accelerator users have such simplicity and distribution requirements, we have developed an adaptive mining algorithm to give unskilled users a tool to analyze their data easily, without the need for complex data preparation or consolidation. For example, Business Intelligence scenarios often partition large data volumes by fiscal year to enable efficient optimizations for the data used in actual workloads. For most mining queries, more than one data partition is of interest, and therefore, distribution handling that leaves the data unaffected is necessary. The algorithms presented in this paper have been developed to work with data stored in SAP BW. A salient feature of SAP BW Accelerator is that it is implemented as a distributed landscape that sits on top of a large number of shared-nothing blade servers. Its main task is to execute OLAP queries that require fast aggregation of many millions of rows of data. Therefore, the distribution of data over the dedicated storage is optimized for such workloads. Data mining scenarios use the same data from storage, but reporting takes precedence over data mining, and hence, the data cannot be redistributed without massive costs. 
Distribution by special data semantics or user-defined selections can produce many partitions and very different partition sizes. The handling of such real-world distributions for frequent-pattern mining is an important task, but it conflicts with the requirement of balanced partitions.
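As a toy illustration of the top-N idea that replaces the minimum-support parameter, the following single-node sketch counts item pairs and returns the N most frequent ones; the distributed, partition-aware algorithms of the thesis are considerably more elaborate, and the basket data below is invented for the example:

    from collections import Counter
    from itertools import combinations

    def top_n_pairs(transactions, n):
        """Return the n most frequent item pairs and their supports.

        A top-N strategy like this replaces the minimum-support parameter:
        the user asks for 'the n strongest patterns' instead of guessing a
        support threshold for an unknown dataset.
        """
        counts = Counter()
        for items in transactions:
            for pair in combinations(sorted(set(items)), 2):
                counts[pair] += 1
        total = len(transactions)
        return [(pair, cnt / total) for pair, cnt in counts.most_common(n)]

    baskets = [
        {"bread", "butter", "milk"},
        {"bread", "butter"},
        {"bread", "milk"},
        {"beer", "chips"},
    ]
    print(top_n_pairs(baskets, n=3))
    # [(('bread', 'butter'), 0.5), (('bread', 'milk'), 0.5), (('butter', 'milk'), 0.25)]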
948

Study and realization of a tunable terahertz source with high spectral purity

Czarny, Romain 29 June 2007 (has links) (PDF)
Generating THz waves of high spectral purity by photomixing is a very promising route towards high-performance THz local oscillators. We therefore proposed an original approach that associates a dual-frequency laser emitting around 1 µm with a photomixer of compatible band gap. This wavelength allows compact, inexpensive diode-pumped lasers to be built and photoconductors with the required electrical properties to be used. We developed two amplified dual-frequency lasers based on Yb-doped active media (KGW and CaF2), which generate optical powers above 1 W together with a continuous electrical beat signal of good spectral purity (<30 kHz). We then studied and characterized two photoconductive materials compatible with illumination at 1 µm: InGaAsN and Be-doped In0.23Ga0.77As grown at low temperature (LT) on a metamorphic substrate. The properties of these two materials were studied and compared with those of LT-GaAs. After modelling the operation of photomixers (taking the contribution of holes into account), we carried out photomixing experiments: we detected a signal of a few tens of nW whose frequency could be tuned up to 2 THz. Finally, we proposed a new type of vertical-waveguide photomixer. Modelling showed that the emitted THz power (0.2 mW at 1 THz), the tunability (0-3 THz) and the spectral purity of the generated signal (<30 kHz) should make this source one of the most attractive in this frequency range.
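For orientation, the relation between the dual-frequency laser and the generated THz beat note follows from the difference of the two optical frequencies (a back-of-the-envelope estimate, not taken from the thesis):

    f_THz = |c/lambda_1 - c/lambda_2| ≈ c·Δλ/λ², e.g. (3×10^8 m/s × 3.3 nm) / (1 µm)² ≈ 1 THz

so a wavelength separation of only a few nanometres around 1 µm already spans the 0-3 THz tuning range mentioned above.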
949

Crystallographic and magnetic characterization of new ruthenium-based oxide phases. Structural relationships with the perovskites

Dussarrat, Christian 22 July 1996 (has links) (PDF)
Investigation of the BaBiO3-BaRuO3 system revealed three new phases that crystallize in three different polytypes of the perovskite. Interpretation of the magnetic properties of these phases, together with crystal-chemical considerations, shows that a charge transfer takes place between ruthenium and bismuth according to the reaction scheme 2 Ru4+ + Bi5+ -> 2 Ru5+ + Bi3+. The catalytic activity of these compounds for the reduction of NO was studied. Five new compounds were isolated in the Ba(Sr)-Ru-O system. Their structures are characterized by [Ru3O12] or [Ru2O9] groups, forming low-dimensional systems (clusters, 2D systems). Structural models were developed and made it possible to establish structural relationships with structure types such as K2NiF4, the 2H perovskite and Sr4PtO6. The magnetic properties were studied and, for some phases, interpreted in terms of isolated entities within a Heisenberg model.
950

Processing data in Lie groups: an algebraic approach. Application to non-linear registration and diffusion tensor imaging

Arsigny, Vincent 29 November 2006 (has links) (PDF)
In recent years, the need for rigorous frameworks to process non-linear data has grown considerably in medical imaging. Here, we proposed several general frameworks for processing some of these data types, which belong to Lie groups, relying on the algebraic properties of these spaces. We presented a general processing framework for symmetric positive-definite matrices, called Log-Euclidean, which is very simple to use and has excellent theoretical properties; it is particularly well suited to processing diffusion tensor images. We also proposed so-called polyaffine frameworks to parameterize locally rigid or affine transformations while guaranteeing their invertibility, again with excellent theoretical properties. Their use is successfully illustrated on the locally rigid registration of histological slices and on the locally affine 3D registration of MRIs of the human brain. This work led us to propose two new general frameworks for computing statistics in finite-dimensional Lie groups: the Log-Euclidean framework, which generalizes our work on tensors, and a framework based on the new notion of bi-invariant mean, whose properties generalize those of the arithmetic mean in Euclidean spaces. Finally, we generalized the Log-Euclidean framework to diffeomorphic geometric deformations in order to allow simple computation of statistics on these transformations, which opens the way to a general and consistent framework for statistics in computational anatomy.
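The Log-Euclidean idea mentioned above, processing symmetric positive-definite matrices through their matrix logarithms, can be sketched in a few lines; the toy diffusion tensors below are invented for the example:

    import numpy as np
    from scipy.linalg import expm, logm

    def log_euclidean_mean(tensors):
        """Log-Euclidean mean of symmetric positive-definite matrices:
        map each tensor to its matrix logarithm, average in that vector space,
        and map the result back with the matrix exponential."""
        logs = [logm(t) for t in tensors]
        return expm(np.mean(logs, axis=0))

    # Two toy 3x3 diffusion tensors (SPD matrices)
    t1 = np.diag([3.0, 1.0, 1.0])
    t2 = np.diag([1.0, 1.0, 0.5])
    mean = log_euclidean_mean([t1, t2])
    print(np.round(mean, 3))   # diag(sqrt(3), 1, sqrt(0.5)): the per-eigenvalue geometric mean

Unlike the plain arithmetic mean of the tensors, this mean stays positive-definite and does not suffer from the swelling effect, which is one reason the abstract calls the framework well suited to diffusion tensor images.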
