41

Elaboration et caractérisation de couches de conversion de longueur d'onde pour le photovoltaïque / Fabrication and characterization of down-conversion materials in thin films for photovoltaic applications

Forissier, Sébastien 14 September 2012 (has links)
Les propriétés structurales et de luminescence de couches minces de TiO2 et Y2O3 dopées terres rares (thulium, terbium et ytterbium) ont été étudiées en vue de les intégrer dans une cellule photovoltaïque comme couche de conversion spectrale du proche UV vers l’infrarouge afin d’en améliorer l’efficacité. Ces couches minces ont été synthétisées par dépôt chimique en phase vapeur à pression atmosphérique à l’aide de précurseurs organo-métalliques et assisté par aérosol (aerosol assisted MOCVD). Les couches minces sont partiellement cristallisées dès la synthèse (400°C pour le TiO2 en phase anatase, 540°C pour Y2O3 en phase cubique). Après traitement thermique, la cristallisation est largement améliorée et la luminescence des ions dopants terres rares est obtenue dans les deux matrices oxydes. Le thulium émet dans une large bande située vers 800 nm et l’ytterbium vers 980 nm. Le terbium, quant à lui, émet dans une gamme située principalement dans le visible. Les spectres d’excitation ont montré que l’absorption des photons se fait via la matrice. En matrice TiO2, une efficacité de transfert d’énergie du Tm3+ vers l’Yb3+ de l’ordre de 20 % a été déterminée pour des teneurs de 0,8 % des deux dopants, ce qui correspond à la limite d’auto-extinction. Le rendement global mesuré est faible ; nous avons montré que les causes probables de cette faible valeur sont le manque d’absorption des couches minces pour obtenir l’excitation de l’ion sensibilisateur, ainsi que des processus de luminescence et de down-conversion insuffisamment efficaces. / Structural and luminescence properties of rare-earth-doped (thulium, terbium and ytterbium) thin films of yttrium oxide and titanium oxide were studied as a down-converting layer from the near UV to the infrared, for integration in solar cells to improve their efficiency. These thin films were synthesized by chemical vapor deposition at atmospheric pressure with organo-metallic precursors and assisted by aerosol (aerosol-assisted MOCVD). 
The thin films were partially crystallized as deposited (400°C in the anatase phase for TiO2, 540°C in the cubic phase for Y2O3). After annealing, the crystallization is greatly improved and rare-earth-ion luminescence is obtained in both oxide matrices. The thulium emits in a broad band centered around 800 nm and the ytterbium at 980 nm. The terbium emits mainly in the visible range. Excitation spectra showed that photon absorption occurs in the matrix. In the TiO2 matrix, an energy-transfer efficiency from Tm3+ to Yb3+ of about 20% was measured for a doping level of 0.8% for both rare earths, which corresponds to the self-quenching limit. The overall measured yield is low; we showed that the probable reasons are the thin films’ lack of absorption for exciting the sensitizer ion and the limited efficiency of the luminescence and down-conversion processes.
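The ~20% Tm3+ → Yb3+ transfer efficiency quoted above is typically estimated from donor luminescence lifetimes. As an illustration only — a standard lifetime-ratio formula, not necessarily the exact procedure used in this thesis, and with made-up lifetime values:

```python
def transfer_efficiency(tau_donor_alone, tau_donor_with_acceptor):
    """Energy-transfer efficiency from donor decay lifetimes:
    eta_ET = 1 - tau_DA / tau_D (standard donor-quenching estimate)."""
    return 1.0 - tau_donor_with_acceptor / tau_donor_alone

# Illustrative lifetimes (microseconds), chosen only to reproduce ~20%:
eta = transfer_efficiency(tau_donor_alone=50.0, tau_donor_with_acceptor=40.0)
print(f"{eta:.0%}")  # 20%
```

The donor lifetime must be measured at the same Tm concentration with and without Yb for the ratio to isolate the transfer channel.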
42

Preparação e caracterização de cerâmicas supercondutoras nos sistemas Y-Ba-Cu-O e Tm-Ba-Cu-O / Preparation and characterization of superconducting ceramics in the system Y-Ba-Cu-O and Tm-Ba-Cu-O

Airton Abrahao Martin 25 August 1988 (has links)
Neste trabalho estudamos a influência da temperatura e do tempo de reação e sinterização na preparação de amostras cerâmicas supercondutoras pelo método de reação no estado sólido. Os resultados indicam claramente que algumas propriedades destes supercondutores, tais como: temperatura crítica (Tc), susceptibilidade magnética (χ), resistividade (ρ), microestruturas, densidade e porosidade aparente, sofrem forte influência das condições de tratamento térmico. Foram preparadas várias amostras dos sistemas YBa2Cu3O6.5+x e TmBa2Cu3O6.5+x, sendo que a temperatura e o tempo ideais de reação encontrados foram de 950°C por 6 horas e 925°C por 48 horas, respectivamente; ambas tratadas em fluxo de oxigênio. A caracterização destas amostras foi feita pelas técnicas de difração de raios X, técnica de quatro pontas (medida da variação da resistividade com a temperatura), ponte de Hartshorn (para a medida da variação da susceptibilidade magnética com a temperatura), microscopia eletrônica de varredura (para análise das microestruturas) e método de imersão (para a medida da densidade e porosidade aparente). A maior temperatura crítica encontrada foi de aproximadamente 94 K para YBa2Cu3O6.5+x e de aproximadamente 91 K para o TmBa2Cu3O6.5+x. / The influence of the reaction and sintering temperature and time on superconducting ceramics prepared by solid-state reaction was determined. The results clearly showed that some of their properties, such as critical temperature (Tc), magnetic susceptibility (χ), resistivity (ρ), microstructure, apparent density, and porosity, are strongly influenced by the preparation conditions. Samples in the YBa2Cu3O6.5+x and TmBa2Cu3O6.5+x systems were prepared. The ideal reaction temperature and time were 950°C for 6 hours and 925°C for 48 hours, respectively, both annealed in flowing O2. 
Sample characterization was carried out by X-ray diffraction, the standard four-probe technique (resistivity versus temperature), a Hartshorn bridge (magnetic susceptibility versus temperature), scanning electron microscopy (microstructure analysis), and the immersion method (apparent density and porosity). The highest critical temperature found was approximately 94 K for YBa2Cu3O6.5+x and approximately 91 K for TmBa2Cu3O6.5+x.
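The four-probe R(T) measurement described above yields a critical temperature once a criterion is chosen. A minimal sketch of that reduction step, using the common midpoint convention (the thesis may instead use onset or zero-resistance criteria) and purely illustrative data:

```python
import numpy as np

def critical_temperature(T, R, fraction=0.5):
    """Midpoint criterion: Tc is the temperature where R(T) crosses
    `fraction` of the normal-state resistance. Assumes T is sorted
    ascending and R rises monotonically through the transition."""
    target = fraction * R[-1]
    return float(np.interp(target, R, T))

# Illustrative four-probe data (K, ohms) around a ~94 K transition:
T = np.array([90.0, 92.0, 94.0, 96.0, 98.0])
R = np.array([0.0, 0.1, 0.5, 0.9, 1.0])
print(critical_temperature(T, R))  # 94.0
```

Different criteria (onset versus zero resistance) shift the reported Tc by a few kelvin, which is why the criterion should always be stated.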
43

Development of the Taiwanese Mandarin Main Concept Analysis and Linguistic Communication Measure: Normative and Preliminary Aphasic Data

Yeh, Chun-chih 01 January 2014 (has links)
Aphasia is a language disorder resulting from damage to brain areas that control language expression and reception. Clinically, the narrative production of Persons with Aphasia (PWA) provides valuable information for the diagnosis of aphasia. There are several types of assessment procedures for the analysis of aphasic narrative production. One of them is to use quantification systems, such as the Cantonese Linguistic Communication Measure (CLCM; Kong & Law, 2004) or the Main Concept Analysis (MCA; Kong, 2009), for objective quantification of aphasic discourse. The purposes of this study are (1) to translate the MCA and CLCM into a Taiwanese Mandarin Main Concept Analysis (TM-MCA) and a Taiwanese Mandarin Linguistic Communication Measure (TM-LCM), respectively, and (2) to validate them on normal speakers and PWA in Taiwan. In the pilot study, a total of sixteen participants, eight certified speech-language pathologists (SLPs) and eight normal speakers, were invited to establish the Taiwanese Mandarin main concepts related to the four sets of sequential pictures created by Kong in 2009. The language samples from the eight normal speakers were then used to determine the informative words (i-words) in the picture sets. In the main study, thirty-six normal speakers and ten PWA were recruited to perform the same picture-description tasks. The elicited language samples were analyzed using both the TM-MCA and TM-LCM. The results suggested that both age and education affected oral discourse performance. Significant differences in the TM-MCA measures and TM-LCM indices were noted between the normal and aphasic groups. It was also found that overall aphasia severity affected the picture-description performance of PWA. Finally, significant correlations between some of the TM-MCA measures and TM-LCM indices were noted. In conclusion, both the TM-MCA and TM-LCM are culturally appropriate for the Taiwanese Mandarin population. 
They can be used to supplement standardized aphasia tests to help clinicians make more informed decisions not only on diagnosis but also on a treatment plan for aphasia.
44

Identifiering av UNO-kort : En jämförelse av bildigenkänningstekniker

Al-Asadi, Yousif, Streit, Jennifer January 2023 (has links)
Att spela sällskapsspelet UNO är en typ av umgängesform där målet är att trivas. En UNO-kortlek har 5 olika färger (blå, röd, grön, gul och joker) och olika symboler. Detta kan vara frustrerande för en person med nedsatt färgseende att delta i, då en stor andel av spelet är beroende av att identifiera färgen på varje kort. Övergripande syftet med detta arbete är att utveckla en prototyp för objektigenkänning av UNO-kort som stöd för färgnedsatta. Arbetet sker genom jämförelse av objektigenkänningsmetoder som Convolutional Neural Network (CNN) och Template Matching-inspirerade metoder: hue template test samt binary template test. Detta kommer att jämföras i samband med igenkänning av färg och symbol tillsammans och separerat. Utvecklandet av prototypen kommer att utföras genom att träna två olika CNN-modeller, där en modell fokuserar endast på symboler och den andra fokuserar på både färg och symbol. Dessa modeller kommer att tränas med hjälp av YOLOv5-algoritmen som anses vara State Of The Art (SOTA) inom CNN med snabb exekvering. Samtidigt kommer template test att utvecklas med hjälp av OpenCV och genom att skapa mallar för korten. Dessa används för att göra en jämförelse av kortet som ska identifieras med hjälp av mallen. Utöver detta kommer K Nearest Neighbor (KNN), en maskininlärningsalgoritm, att utvecklas med syfte att identifiera endast färg på korten. Slutligen utförs en jämförelse mellan dessa metoder genom mätning av prestanda som består av accuracy, precision, recall och latency. Jämförelsen kommer att ske mellan varje metod genom en confusion matrix för färger och symboler för respektive modell. Resultatet av studien visade på att modellen som kombinerar CNN och KNN presterade bäst vid valideringen av de olika metoderna. Utöver detta visar studien att template test är snabbare att implementera än CNN på grund av tiden för träningen som ett neuralt nätverk kräver. Dessutom visar latency att det finns en skillnad mellan de olika modellerna, där CNN presterade bäst. 
/ Engaging in the social game of UNO represents a form of social interaction aimed at promoting enjoyment. Each UNO card deck consists of five different colors (blue, red, green, yellow and joker) and various symbols. However, participating in such a game can be frustrating for individuals with color vision impairment, since a substantial portion of the game relies on accurately identifying the color of each card. The overall purpose of this research is to develop a prototype for object recognition of UNO cards to support individuals with color vision impairment. This thesis involves comparing object recognition methods, namely Convolutional Neural Network (CNN) and Template Matching (TM). Each method will be compared with respect to color and symbol recognition, both separately and combined. The development of such a prototype will be through creating and training two different CNN models, where the first model focuses solely on symbol recognition while the other model incorporates both color and symbol recognition. These models will be trained through an algorithm called YOLOv5, which is considered state-of-the-art (SOTA) with fast execution. At the same time, two models of TM-inspired methods, the hue template test and the binary template test, will be developed with the help of OpenCV by creating templates for the cards. Each template will be used to compare against the detected card in order to classify it. Additionally, the K Nearest Neighbor (KNN) algorithm, a machine learning algorithm, will be developed specifically to identify the color of the cards. Finally, a comparative analysis of these methods will be conducted by evaluating performance metrics such as accuracy, precision, recall and latency. The comparison will be carried out between each method using a confusion matrix for color and symbol in the respective models. The study’s findings revealed that the model combining CNN and KNN demonstrated the best performance during the validation of the different models. 
Furthermore, the study shows that template tests are faster to implement than CNN due to the training that a neural network requires. Moreover, the execution time shows that there is a difference between the models, with CNN achieving the highest performance.
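The KNN colour classifier mentioned above can be sketched in a few lines: classify a card by its dominant hue against labelled reference hues. Everything below is illustrative — the reference hues, the hue scale, and ignoring red's hue wrap-around are simplifying assumptions, not the thesis's implementation:

```python
import numpy as np

# Illustrative reference hues (OpenCV-style 0-179 scale) per UNO colour;
# real training data would come from labelled card images.
TRAIN_HUES = np.array([5.0, 8.0, 25.0, 28.0, 60.0, 62.0, 115.0, 118.0])
TRAIN_LABELS = ["red", "red", "yellow", "yellow",
                "green", "green", "blue", "blue"]

def knn_color(hue, k=3):
    """Majority vote among the k nearest reference hues (1-D KNN).
    Simplification: ignores the hue wrap-around near red (hue ~179)."""
    nearest = np.argsort(np.abs(TRAIN_HUES - hue))[:k]
    votes = [TRAIN_LABELS[i] for i in nearest]
    return max(set(votes), key=votes.count)

print(knn_color(61.0))   # green
print(knn_color(116.0))  # blue
```

In a full pipeline the hue would come from the mean HSV value inside the detected card region, with the symbol classified separately by the CNN or template test.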
45

Capteurs optiques intégrés basés sur des lasers à semiconducteur et des résonateurs en anneaux interrogés en intensité / Integrated optical sensors based on semiconductor lasers and ring resonators using intensity interrogation

Song, Jinyan 14 December 2012 (has links)
Ce travail de thèse porte sur la conception et la réalisation de capteurs optiques ultracompacts et sensibles utilisant le mode d’interrogation en intensité pour la détection d’analytes chimiques et biologiques. Deux approches, l’intégration hybride et l’intégration monolithique, ont été explorées durant cette thèse. Après un descriptif des outils d’analyse et de conception de guides d’onde et de micro résonateurs en anneaux, le manuscrit présente l’intégration hybride d’un laser Fabry-Perot en semiconducteur III-V avec un résonateur en anneau basé sur du matériau SOI. Le laser Fabry-Perot à faible coût fonctionnant en multimode longitudinal a été utilisé comme peigne de référence pour le résonateur en anneau en contact avec un échantillon liquide. L’effet Vernier a été implanté dans le système de détection en utilisant le mode d’interrogation en intensité. La largeur spectrale étroite du laser avec sa densité de puissance élevée ont permis d’obtenir un capteur de plus haute sensibilité en comparaison avec le capteur en double anneaux réalisé précédemment. Une étude numérique d’un capteur composé d’un laser Fabry-Perot et deux résonateurs en anneaux permettant de compenser la fluctuation de température a été ensuite présentée. Concernant l'intégration monolithique, l'interface entre oxyde et non-oxyde après l’oxydation de AlGaAs a été étudiée au Central de Technologies du LPN/CNRS. Un phénomène d’oxydation verticale de GaAs ou AlGaAs avec une faible teneur en aluminium activée par une couche voisine oxydée de AlGaAs avec une forte teneur en aluminium a été identifié expérimentalement. Afin de limiter l’oxydation verticale et de réduire la rugosité des interfaces, des guides d’onde basés respectivement sur une structure intégrant un super-réseau et sur une structure standard ont été fabriqués et caractérisés. L’impact de l'hydrogène sur l'activation du processus d'oxydation de GaAs ou AlGaAs avec une faible teneur en Al a été mis en évidence. 
Enfin, ce manuscrit décrit la réalisation et la caractérisation d’un laser Fabry-Perot fonctionnant en mode TM. Ce laser constitue une brique importante vers l’intégration monolithique d’un capteur extrêmement sensible. / The objective of the thesis is to realize integrated optical sensors with high sensitivity using the intensity interrogation method for chemical and biological analyte detection. For this purpose, two approaches, hybrid integration and monolithic integration, have been explored theoretically and experimentally during this thesis. After a review of the design and analysis tools for optical waveguides and micro-ring resonators, the manuscript reports an experimental demonstration of a highly-sensitive intensity-interrogated optical sensor based on a cascaded III-V semiconductor Fabry-Perot laser and a silicon-on-insulator ring resonator. The low-cost, easy-to-fabricate Fabry-Perot laser serves as a reference comb for the sensing ring in contact with the liquid sample. The Vernier effect has been exploited in the detection scheme using the intensity interrogation mode. The sharp emission peaks of the FP laser with high spectral power density result in a high sensitivity for the sensor compared to the previously investigated all-passive double-ring sensor. A temperature-compensation method has also been investigated numerically to improve the performance of the sensor. Concerning the potential monolithic integration of laser and sensing waveguide, the interface between oxide and non-oxide after wet oxidation of buried AlGaAs has been investigated at the Technology Centre of LPN/CNRS. The vertical oxidation of GaAs or AlGaAs with low Al content activated by a neighbouring oxidized Al-rich AlGaAs layer has been discovered experimentally. To limit the vertical oxidation and reduce the roughness of the interface, waveguides with a buried oxide layer on a superlattice sample and a standard sample have been fabricated and characterised. 
The key role of hydrogen incorporation in the activation of the oxidation process for GaAs or AlGaAs materials with low Al content has been shown experimentally. Finally, this thesis reports the fabrication and characterisation results of a Fabry-Perot laser working in TM mode, which is an important building block for a highly-sensitive monolithically-integrated circuit.
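The Vernier magnification exploited above can be summarised by a standard expression for Vernier-cascade ring sensors (a textbook result; the thesis's exact formulation may differ). With FSR_ref the free spectral range of the reference comb and FSR_ring that of the sensing ring, the envelope shift magnifies the single-ring sensitivity S_ring:

```latex
M = \frac{\mathrm{FSR}_{\mathrm{ref}}}{\left|\,\mathrm{FSR}_{\mathrm{ring}} - \mathrm{FSR}_{\mathrm{ref}}\,\right|},
\qquad
S_{\mathrm{Vernier}} = M \, S_{\mathrm{ring}}
```

The closer the two free spectral ranges, the larger the magnification M, at the cost of a wider spectral envelope that must be tracked.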
46

Extracting Parallelism from Legacy Sequential Code Using Transactional Memory

Saad Ibrahim, Mohamed Mohamed 26 July 2016 (has links)
Increasing the number of processors has become the mainstream of modern chip design approaches. However, most applications are designed or written for single-core processors, so they do not benefit from the numerous underlying computation resources. Moreover, there exists a large base of legacy software which requires an immense effort and cost of rewriting and re-engineering to be made parallel. In the past decades, there has been a growing interest in automatic parallelization. This is to relieve programmers from the painful and error-prone manual parallelization process, and to cope with the new architecture trend of multi-core and many-core CPUs. Automatic parallelization techniques vary in properties such as: the level of parallelism (e.g., instructions, loops, traces, tasks); the need for custom hardware support; using optimistic execution or relying on conservative decisions; online, offline or both; and the level of source code exposure. Transactional Memory (TM) has emerged as a powerful concurrency control abstraction. TM simplifies parallel programming to the level of coarse-grained locking while achieving fine-grained locking performance. This dissertation exploits TM as an optimistic execution approach for transforming a sequential application into a parallel one. The design and the implementation of two frameworks that support automatic parallelization, Lerna and HydraVM, are proposed, along with a number of algorithmic optimizations to make the parallelization effective. HydraVM is a virtual machine that automatically extracts parallelism from legacy sequential code (at the bytecode level) through a set of techniques including code profiling, data dependency analysis, and execution analysis. HydraVM is built by extending the Jikes RVM and modifying its baseline compiler. Correctness of the program is preserved through exploiting Software Transactional Memory (STM) to manage concurrent and out-of-order memory accesses. 
Our experiments show that HydraVM achieves speedup between 2×-5× on a set of benchmark applications. Lerna is a compiler framework that automatically and transparently detects and extracts parallelism from sequential code through a set of techniques including code profiling, instrumentation, and adaptive execution. Lerna is cross-platform and independent of the programming language. The parallel execution exploits memory transactions to manage concurrent and out-of-order memory accesses. This scheme makes Lerna very effective for sequential applications with data sharing. This thesis introduces the general conditions for embedding any transactional memory algorithm into Lerna. In addition, ordered versions of four state-of-the-art algorithms have been integrated and evaluated using multiple benchmarks including the RSTM micro benchmarks, STAMP and PARSEC. Lerna showed great results, with an average 2.7× (and up to 18×) speedup over the original (sequential) code. While prior research shows that transactions must commit in order to preserve program semantics, enforcing this ordering constrains scalability at large core counts. In this dissertation, we eliminate the need to commit transactions sequentially without affecting program consistency. This is achieved by building a cooperation mechanism in which transactions can forward some changes safely. This approach eliminates some of the false conflicts and increases the concurrency level of the parallel application. This thesis proposes a set of commit-order algorithms that follow the aforementioned approach. Interestingly, using the proposed commit-order algorithms, the peak gain over the sequential non-instrumented execution is 10× in the RSTM micro benchmarks and 16.5× in STAMP. Another main contribution is to enhance the concurrency and the performance of TM in general, and its usage for parallelization in particular, by extending TM primitives. 
The extended TM primitives extract the embedded low-level application semantics without affecting the TM abstraction. Furthermore, as the proposed extensions capture common code patterns, they can be handled automatically during compilation. In this work, that was done by modifying the GCC compiler to support our TM extensions. Results showed speedups of up to 4× on different applications including micro benchmarks and STAMP. Our final contribution is supporting the commit order through Hardware Transactional Memory (HTM). The HTM contention manager cannot be modified because it is implemented inside the hardware. Given this constraint, we exploit HTM to reduce the transactional execution overhead by proposing two novel commit-order algorithms and a hybrid reduced-hardware algorithm. The use of HTM improves performance by up to 20%. / Ph. D.
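The ordered-commit idea at the heart of this parallelization scheme can be illustrated with a minimal sketch: speculative workers run concurrently, but their buffered writes are applied strictly in original chunk order. This toy version is a deliberate simplification; it omits conflict detection, aborts, and the forwarding optimisation described above:

```python
import threading

class OrderedCommitter:
    """Serialize commits in a fixed ticket order while workers
    execute speculatively in parallel."""
    def __init__(self):
        self._next = 0
        self._cv = threading.Condition()

    def commit(self, ticket, apply_writes):
        with self._cv:
            while ticket != self._next:   # not our turn yet
                self._cv.wait()
            apply_writes()                # publish buffered writes
            self._next += 1
            self._cv.notify_all()

results = []
committer = OrderedCommitter()

def worker(i):
    value = i * i                         # speculative computation
    committer.commit(i, lambda: results.append(value))

threads = [threading.Thread(target=worker, args=(i,)) for i in range(8)]
for t in reversed(threads):               # deliberately start out of order
    t.start()
for t in threads:
    t.join()
print(results)  # [0, 1, 4, 9, 16, 25, 36, 49] regardless of scheduling
```

The sequential commit point is exactly the scalability bottleneck the dissertation's cooperation mechanism targets: here every transaction blocks until all predecessors have committed.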
47

Formulation, in vitro release and transdermal diffusion of diclofenac salts by implementation of the delivery gap principle / Hanri Smith

Smith, Hanri January 2013 (has links)
Nonsteroidal anti-inflammatory drugs (NSAIDs) are widely used in the treatment of inflammation and pain (Escribano et al., 2003:203). Diclofenac, a classical NSAID, is considerably more effective as an analgesic, antipyretic and anti-inflammatory drug than other traditional NSAIDs, like indomethacin and naproxen (Grosser et al., 2011:986). However, the use of diclofenac is known for its many side effects, such as gastric disorders, while fluid and sodium retention are also commonly observed (Rossiter, 2012:391). Since topical diclofenac offers a more favourable safety profile, it is a valuable substitute for oral NSAID therapy in the treatment of osteoarthritis (Roth & Fuller, 2011:166). The benefits of topically applied NSAIDs, compared to oral administration and systemic delivery, include the easy cessation of treatment, should effects become troublesome (Brown et al., 2006:177), the avoidance of extensive, first-pass metabolism (Cleary, 1993:19; Kornick, 2003:953; Prausnitz & Langer, 2008:1261; Lionberger & Brennan, 2010:225), reduced systemic side effects (Colin Long, 2002:41), convenience of application and improved patient compliance (Cleary, 1993:19; Prausnitz & Langer, 2008:1261). An approach that is often applied in optimising the solubility and dissolution rate of poorly water soluble, weak electrolytes is to prepare a salt of the active pharmaceutical ingredient (API) (Minghetti et al., 2007:815; O’Connor & Corrigan, 2001:281-282). Diclofenac is frequently administered as a salt, due to the high partition coefficient and very low water solubility of this molecule (Fini et al., 1999:164). Formulating for Efficacy (FFE™) is a software programme designed by JW Solutions to facilitate the formulation of cosmetic ingredients or solvents into a product that would optimally deliver active ingredients into the skin. The notion is built upon solubility, i.e. solubility of the active ingredient in the formulation and solubility of the formulation in the skin. 
This programme could also be employed to optimise amounts of predetermined ingredients, to propose formulations that would ensure optimal drug delivery, to calculate the skin delivery gap (SDG) and to demonstrate transdermal permeation of active ingredients and excipients (JW Solutions Software, 2013a). When the SDG is known, it mathematically indicates the optimal active ingredient and topical delivery vehicle to use (JW Solutions, 2013b). In this study, diclofenac sodium (DNa), diclofenac diethylamine (DDEA) and diclofenac N-(2-hydroxyethyl) pyrrolidine (DHEP) were each formulated in the following emulgels: * An emulgel optimised towards the stratum corneum (SC) (enhancing drug delivery into this layer and deeper tissues) (oily phase ~30%), * A more hydrophilic emulgel (oily phase ~15%), and * A more lipophilic emulgel (oily phase ~45%). Components of the oily phase and its respective amounts, as well as the SDG of formulations, were determined by utilising the FFE™ software of JW Solutions (2013a). The aqueous solubilities of DNa, DDEA and DHEP were determined and their respective values were 11.4 mg/ml, 8.0 mg/ml and 11.9 mg/ml, all indicative of effortless percutaneous delivery (Naik et al., 2000:319). Log D (octanol-buffer distribution coefficient) (pH 7.4) determinations for DNa, DDEA and DHEP were performed and their values established at 1.270 (DNa), 1.291 (DDEA) and 1.285 (DHEP). According to these values, diclofenac, when topically applied as a salt in a suitable vehicle, should permeate transdermally without the aid of radical intervention (Naik et al., 2000:319; Walters, 2007:1312). Membrane release studies were also carried out in order to determine the rate of API release from these new formulations. Results confirmed that diclofenac was indeed released from all nine of the formulated emulgels. The more hydrophilic DNa formulation released the highest average percentage of diclofenac (8.38%) after 6 hours. 
Subsequent transdermal diffusion studies were performed to determine the diclofenac concentration that permeated the skin. The more hydrophilic DNa emulgel showed the highest average percentage skin diffusion (0.09%) after 12 hours, as well as the highest average flux (1.42 ± 0.20 μg/cm²·h). The concentrations of diclofenac in the SC-epidermis (SCE) and epidermis-dermis (ED) were determined through tape stripping experiments. The more lipophilic DNa emulgel demonstrated the highest average concentration (0.27 μg/ml) in the ED, while the DNa emulgel that had been optimised towards the SC had the highest concentration in the SCE (0.77 μg/ml). / MSc (Pharmaceutics), North-West University, Potchefstroom Campus, 2014
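The flux figure above (1.42 μg/cm²·h) is conventionally obtained as the slope of the cumulative amount permeated per unit area versus time, over the steady-state (linear) region. A sketch of that calculation with fabricated data (the thesis's raw diffusion data are not reproduced here):

```python
import numpy as np

def steady_state_flux(t_hours, cumulative_ug_per_cm2):
    """Transdermal flux (ug/cm^2 per hour) as the slope of the linear
    steady-state portion of cumulative permeation vs. time."""
    slope, _intercept = np.polyfit(t_hours, cumulative_ug_per_cm2, 1)
    return slope

# Fabricated linear-region data with a built-in slope of 1.42:
t = np.array([4.0, 6.0, 8.0, 10.0, 12.0])
q = 1.42 * t - 2.0
print(round(steady_state_flux(t, q), 2))  # 1.42
```

In practice only the later, visibly linear time points are included in the fit, since early samples reflect the lag phase before steady-state diffusion is established.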
49

Étude du couplage adiabatique entre deux guides d'ondes ayant une forte différence d'indice de réfraction / Study of the adiabatic coupling between two waveguides with a large refractive-index difference

Bédard, Sylvain January 2009 (has links)
This document presents the fabrication steps and the characterization of an optical coupler composed of two waveguides with widely separated refractive indices. The higher-index waveguide is made of silicon (Si), with a refractive index of 3.44, and the lower-index waveguide is made of SU-8™, whose refractive index is 1.565. The coupler's main characteristic is its geometry: coupling is achieved by bringing the two waveguides progressively closer together. The SU-8™ waveguide approaches the Si waveguide along a constant radius of curvature in the coupling zone and finally overlaps the Si waveguide once the optical paths are parallel. Because of this progressive approach, the optical transfer occurs adiabatically. A numerical study showed that a coupling coefficient of more than 30% could be reached. Although the experimental results are not as encouraging, they show the presence of non-negligible coupling.
50

Is talent management just old wine in new bottles? : the case of multinational corporations in Beijing

Chuai, Xin January 2008 (has links)
Talent Management (TM), as a new managerial concept within Human Resource Management (HRM), has attracted increasing attention from the academic as well as the business world, but many gaps and omissions remain for further theoretical development and empirical study. Hence, understanding the differences between TM and HRM becomes necessary. Given an absence of clarity in the literature, the aim of the present study is to gain a thorough understanding of TM among Multinational Corporations (MNCs) in Beijing, to explore to what extent this managerial idea represents anything new, and to find out why organisations adopt TM. A case study method was selected as the main research methodology. The study was undertaken in Beijing, and the target companies were limited to four MNCs, respectively from the IT (two organisations), health care and education industries, and three consultancy companies. The theoretical perspective largely draws upon the literature on TM, management fashion and institutional theory. Findings show that the topic of TM has been enthusiastically pursued. However, there is not a single concise definition shared by all the case study organisations, even though different strands of understanding regarding TM are explored in this study. The thesis has also explored what is distinctive about TM, and the factors and purposes influencing the adoption of TM in China. Through comparing HRM with the literature and empirical findings relating to TM, this thesis has found that TM seems to presage some new approaches to the management of the people resource in organisations, rather than a simple repackaging of old techniques and ideas with a new label. Meanwhile, this thesis strongly challenges the idea that TM is another struggle by HR professionals to enhance their legitimacy, status and credibility. Therefore, TM should not be considered as ‘old wine in new bottles’, at least with respect to the case of MNCs in China.
