  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
161

Do students have negative stereotypes of women who elect to have breast augmentation?

Bly, Nicole 01 January 2007 (has links)
This study explored whether women who have breast augmentation are more vulnerable to negative stereotypes about their intelligence and abilities than women who do not elect to have the surgery. It also explored whether the stated motivation for seeking such surgery contributes to these stereotypes. Participants read experimentally manipulated biographies of a fictitious female author, followed by her short story. The only manipulation was a single line in the author's biography that varied the reason for the plastic surgery across four conditions (for herself, for her significant other, for medical reconstructive reasons, or no surgery). Participants were asked to read and evaluate the author and her work. The results of this study suggest that women who elect to have plastic surgery for themselves or for their significant others are perceived as being less deep. The results also suggest that participants who had themselves previously elected to have plastic surgery judged the author's fictional work more critically when she had plastic surgery for herself or for her husband.
162

Mélanges bayésiens de modèles d'extrêmes multivariés : application à la prédétermination régionale des crues avec données incomplètes / Bayesian model mergings for multivariate extremes : application to regional predetermination of floods with incomplete data

Sabourin, Anne 24 September 2013 (has links)
La théorie statistique univariée des valeurs extrêmes se généralise au cas multivarié, mais l'absence d'un cadre paramétrique naturel complique l'inférence de la loi jointe des extrêmes. Les marges d'erreur associées aux estimateurs non paramétriques de la structure de dépendance sont difficilement accessibles à partir de la dimension trois. Cependant, quantifier l'incertitude est d'autant plus important pour les applications que le problème de la rareté des données extrêmes est récurrent, en particulier en hydrologie. L'objet de cette thèse est de développer des modèles de dépendance entre extrêmes, dans un cadre bayésien permettant de représenter l'incertitude. Le chapitre 2 explore les propriétés des modèles obtenus en combinant des modèles paramétriques existants, par mélange bayésien (Bayesian Model Averaging, BMA). Un modèle semi-paramétrique de mélange de Dirichlet est étudié au chapitre suivant : une nouvelle paramétrisation est introduite afin de s'affranchir d'une contrainte de moments caractéristique de la structure de dépendance et de faciliter l'échantillonnage de la loi a posteriori. Le chapitre 4 est motivé par une application hydrologique : il s'agit d'estimer la structure de dépendance spatiale des crues extrêmes dans la région cévenole des Gardons en utilisant des données historiques enregistrées en quatre points. Les données anciennes augmentent la taille de l'échantillon, mais beaucoup d'entre elles sont censurées. Une méthode d'augmentation de données est introduite, dans le cadre du mélange de Dirichlet, palliant l'absence d'expression explicite de la vraisemblance censurée. Les conclusions et perspectives sont discutées au chapitre 5. / Univariate extreme value theory extends to the multivariate case, but the absence of a natural parametric framework for the joint distribution of extremes complicates inference. Available non-parametric estimators of the dependence structure do not come with tractable uncertainty intervals for problems of dimension greater than three. However, uncertainty estimation is all the more important for applied purposes as data scarcity is a recurrent issue, particularly in the field of hydrology. The purpose of this thesis is to develop modeling tools for the dependence structure between extremes, in a Bayesian framework that allows uncertainty assessment. Chapter 2 explores the properties of the model obtained by combining existing parametric models in a Bayesian Model Averaging framework. A semi-parametric Dirichlet mixture model is studied next: a new parametrization is introduced in order to relax a moments constraint which characterizes the dependence structure. The re-parametrization significantly improves the convergence and mixing properties of the reversible-jump algorithm used to sample the posterior. The last chapter is motivated by a hydrological application, which consists in estimating the dependence structure of floods recorded at four neighboring stations in the 'Gardons' region, southern France, using historical data. The latter increase the sample size, but most of them are censored. The lack of an explicit expression for the likelihood in the Dirichlet mixture model is handled within a data augmentation framework.
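The Bayesian Model Averaging step described above combines candidate parametric dependence models through their posterior model probabilities. A minimal sketch of the mechanics, with invented log marginal likelihoods, a uniform model prior, and toy predictive densities (none of these numbers come from the thesis):

```python
import numpy as np

def bma_predictive(log_marglik, model_preds):
    """Combine per-model predictive densities with BMA weights.

    log_marglik : array of log marginal likelihoods, one per model
    model_preds : array (n_models, n_points) of predictive densities
    """
    # Posterior model probabilities under a uniform model prior,
    # computed stably by subtracting the maximum before exponentiating.
    w = np.exp(log_marglik - log_marglik.max())
    w /= w.sum()
    # The BMA predictive is the weight-averaged mixture of the models.
    return w @ model_preds, w

# Toy example: two hypothetical dependence models scored on held-out data.
log_ml = np.array([-102.3, -100.1])          # model 2 fits better
preds = np.array([[0.2, 0.5], [0.4, 0.3]])   # densities at two test points
mixture, weights = bma_predictive(log_ml, preds)
```

The mixture leans towards the better-fitting model while still propagating the uncertainty of the weaker one, which is the point of BMA over simple model selection.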
163

Agrégation de ressources avec contrainte de distance : applications aux plateformes de grande échelle / Resource clustering with distance constraint : applications to large scale platforms

Larchevêque, Hubert 27 September 2010 (has links)
Durant cette thèse, nous avons introduit les problèmes de Bin Covering avec Contrainte de Distance (BCCD) et de Bin Packing avec Contrainte de Distance (BPCD), qui trouvent leur application dans les réseaux de grande échelle, tel Internet. L'étude de ces problèmes que nous effectuons dans des espaces métriques quelconques montre qu'il est impossible de travailler dans un tel cadre sans avoir recours à de l'augmentation de ressources, un procédé qui permet d'élaborer des algorithmes construisant des solutions moins contraintes que la solution optimale à laquelle elles sont comparées. En plus de résultats d'approximation intéressants, nous prouvons la difficulté de ces problèmes si ce procédé n'est pas utilisé. Par ailleurs, de nombreux outils ont pour objectif de plonger les grands réseaux qui nous intéressent dans des espaces métriques bien décrits. Nous avons alors étudié nos problèmes dans plusieurs espaces métriques spécifiques, et, en particulier, ceux générés par certains de ces outils, comme Vivaldi et Sequoia. / During this Ph.D. we introduced Bin Covering under Distance Constraint (BCCD) and Bin Packing under Distance Constraint (BPCD). These two problems find their applications in the context of large-scale networks, like the Internet. We studied these problems in general metric spaces and proved that using resource augmentation is mandatory. Resource augmentation makes it possible to design algorithms that build solutions which are less constrained than the optimal solution they are compared to. We obtained interesting approximation algorithms using this relaxation, and proved hardness results showing that the relaxation is necessary. Moreover, many tools are used to embed the large networks we are interested in into specific metric spaces. We therefore also studied these problems in several specific metric spaces, in particular those generated by Vivaldi and Sequoia, two such tools.
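To illustrate the kind of problem BCCD poses, here is a naive greedy heuristic, not the thesis's approximation algorithm and without any resource augmentation: it forms groups of mutually close points until each group's total weight reaches the covering capacity.

```python
import numpy as np

def greedy_cover(points, weights, dist_max, capacity):
    """Naive greedy sketch of Bin Covering under a Distance Constraint:
    build groups whose pairwise distances stay within dist_max (a
    diameter constraint) and whose total weight reaches capacity.
    Illustrative only; leftover points may remain unassigned."""
    unused = list(range(len(points)))
    bins = []
    while unused:
        seed = unused.pop(0)
        group, load = [seed], weights[seed]
        # Add points close to every current member of the group,
        # until the bin's covering capacity is reached.
        for j in list(unused):
            if all(np.linalg.norm(points[j] - points[i]) <= dist_max
                   for i in group):
                group.append(j)
                unused.remove(j)
                load += weights[j]
                if load >= capacity:
                    break
        if load >= capacity:          # only fully covered bins count
            bins.append(group)
    return bins
```

On two well-separated clusters of unit-weight points, the heuristic covers one bin per cluster; a point that cannot join any covered bin is simply discarded, which hints at why provably good algorithms for this problem need relaxations such as resource augmentation.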
164

Knochenregeneration mit mikrofixierten Titanbarrieren an zahnlosen und implantattragenden Kieferabschnitten / Bone regeneration with microfixated titanium barriers on edentulous and implant-placed alveolar ridge sites.

Wyszkowski, Agatha 29 June 2010 (has links)
No description available.
165

Bio-augmentation de l'activité nitrifiante des boues activées / Bio-augmentation of the nitrifying activity of activated sludge

Souef, Antoine 29 January 2009 (has links)
La bio-augmentation des stations d’épuration est parfois envisagée pour faire face à des situations transitoires au cours desquelles la capacité nitrifiante des boues activées est insuffisante pour traiter efficacement la charge reçue par l’installation. Le succès de cette stratégie repose sur l’activité nitrifiante de la biomasse ajoutée ainsi que sur sa capacité d’adaptation et de maintien dans les boues activées en place. L’objectif de ce travail était d’évaluer la possibilité d’adaptation des micro-organismes d’une solution commerciale nitrifiante dans une station d’épuration municipale à très faible charge. Les populations nitrifiantes composant la solution étudiée ont été comparées à celles des boues activées de treize stations d’épuration, échantillonnées en été et en hiver. Une démarche couplant des outils de la biologie moléculaire et du génie des procédés a été mise en oeuvre afin d’analyser et de comparer ces populations selon trois axes : diversité, abondance et activité. Cette comparaison a été complétée par une expérience de bio-augmentation en réacteur fermé. D’après les profils obtenus en DGGE, les communautés nitrosantes des stations d’épuration à très faible charge sont généralement dominées par une même espèce appartenant au groupe Nitrosomonas oligotropha. Les biomasses nitrosantes et nitratantes ont été chacune estimées entre 0,5 et 3,5 % de la biomasse totale des boues activées par marquage FISH. La solution commerciale comporte une seule espèce nitrosante, du groupe N. europaea, qui représente 45 % de sa biomasse. Les analyses cinétiques, réalisées par suivi direct des vitesses de réaction (analyses chimiques) et/ou par respirométrie, indiquent que N. europaea adopte une stratégie de type r, contrairement aux bactéries autochtones des boues activées (stratégie de type K).
Ces résultats expérimentaux laissent supposer une faible capacité d’adaptation de la biomasse nitrifiante de cette solution commerciale aux conditions environnementales des stations d’épuration à très faible charge. En revanche, l’expérience de bio-augmentation a montré sa capacité d’intégration aux flocs, et le maintien de son activité lors de son mélange à des boues activées. / Bio-augmentation is sometimes proposed to cope with transient situations in wastewater treatment plants, when the nitrifying capacity of the autochthonous biomass is insufficient to treat the nitrogen load. The success of such a strategy depends on the nitrifying activity of the added biomass, its adaptation to local conditions, and its residence time in the system. This work aimed at evaluating the adaptation of nitrifying microorganisms from a commercial solution in a very low-loaded municipal wastewater treatment plant. Nitrifying populations of the studied solution were compared to those of thirteen wastewater treatment plants (activated sludge), sampled in summer and in winter. The analysis and comparison of the biomasses were performed using tools from two disciplines: molecular biology and process engineering. Three aspects were characterised: the diversity, abundance and activity of the biomasses. A bio-augmentation experiment performed in a batch reactor completed the comparison.
DGGE profiles showed that Ammonia Oxidising Bacteria (AOB) communities from low-loaded wastewater treatment plants are mostly dominated by the same species, related to the Nitrosomonas oligotropha group. The abundance of ammonia-oxidising bacteria and of nitrite-oxidising bacteria, estimated using the FISH technique, was similar, each ranging from 0.5 to 3.5 % of the total biomass. The AOB species present in the commercial solution is identified as Nitrosomonas europaea and represents 45 % of the solution’s total biomass. Kinetic analyses carried out by chemical analysis and/or by respirometry indicate that N. europaea behaves as an r-strategist, whereas the autochthonous activated-sludge bacteria are K-strategists. These results suggest a limited capacity of the commercial solution’s nitrifying biomass to adapt to the environmental conditions prevailing in low-loaded municipal wastewater treatment plants. On the other hand, the bio-augmentation experiment showed the capacity of the added microorganisms to integrate the activated sludge flocs and to maintain their activity when added to activated sludge.
166

Analysis and Detection of Ionospheric Depletions over the Indian Region in the Context of Satellite Navigation

Joshi, Prachi January 2013 (has links) (PDF)
Satellites have revolutionized navigation by making it more universal, accessible and accurate. The Global Positioning System (GPS) is the most widely used satellite navigation system in the world. However, it is prone to errors from various sources such as the ionosphere, the troposphere and clock biases. In order to make the system accurate and reliable enough to meet the requirements of safety-critical applications, Satellite Based Augmentation Systems (SBAS) have recently been designed in various countries to augment GPS by providing corrections for its errors. An Indian SBAS called GAGAN (GPS Aided Geo Augmented Navigation), developed for the Airports Authority of India (AAI) by the Indian Space Research Organisation (ISRO), is currently being installed and proven for aviation and other uses. The uncertain propagation delay of signals through the ionosphere is the most important contributor to error in GPS positioning, and its elimination to the maximum possible extent is a major task of SBAS overlays. Ionospheric delays have steady, cyclic, and irregular components. The last type is of particular concern because it is unpredictable. This thesis deals with ionospheric depletion, an important phenomenon of this class that is specific to tropical regions like India and hence has not been well studied in the context of the other SBAS systems of the world, which cover mid-latitude domains. A depletion is an ionospheric phenomenon in which the density of electrons dips suddenly and then returns close to its previous value. It poses a challenge to the model adopted for ionospheric delay estimation, since it may not be detectable by ground systems because of its localized nature, and its occurrence and intensity cannot be predicted. In this work we have analyzed the characteristics of depletions over the Indian region, such as their distribution, frequency of occurrence, and depth and duration parameters.
We have then studied and implemented an existing algorithm to detect a depletion from Total Electron Content (TEC) data. This algorithm was found to be inaccurate in estimating depletion duration, and we have proposed an improved algorithm for depletion detection and shown it to be more suitable for the Indian SBAS, GAGAN. The algorithm uses multiple thresholds for depletion detection in order to improve performance in the presence of irregularities, including noise. These thresholds are determined by analyzing real TEC data containing depletion events over the Indian region. The detected depletion events are those with a strong likelihood of contributing large range errors and degrading GAGAN's reliability. The thresholds include depletion parameters such as the depth, the duration, the rate of change of TEC, and the rate of change of the slope of the TEC curve. The characterization of depletion events over the Indian region yielded useful insights into the behaviour of the phenomenon. It was observed that depletion events were invariably present post-sunset, between 1900 and 0200 hrs. This observation is consistent with other studies on plasma bubbles so far. The average depth of a depletion was found to correspond to about 3.31 meters of propagation delay, while the strongest depletion corresponds to about 5.04 meters of delay. The latter observation underscores the need to detect and study the phenomenon, since it is capable of causing a significant loss of accuracy and reliability in the system. The duration of a depletion was found to range from about 10 minutes to 2.35 hours. In addition, a statistical study of the relationships among the different parameters and a study devoted to now-casting of depletions were carried out to gain a more quantitative insight into the phenomenon.
Scintillation is another ionospheric phenomenon, which causes rapid fluctuations of the phase and amplitude of the signal due to TEC variations in the ionosphere. Occurrences of depletion were observed to be accompanied by scintillation, as also noted in previous studies. The correlation of depletion and scintillation was studied using the data available for this research. A spatial characterization of depletion events was also investigated using temporal TEC data from neighbouring stations relatively close to each other. This study addressed the movement of the plasma bubble with respect to advection speed and direction, with definite results. Attention was also devoted to the spatial dimension of the bubble as observed from various stations. Contributions to the variability in the apparent spatial extent come from observing the depletion event along varying lines-of-sight corresponding to different GPS satellites, which are themselves moving, and from the differential 'slicing' effect due to the location of the stations with respect to the plasma bubble, in addition to the evolution of the bubble during transit. The detection of a depletion and its temporal characterization, together with knowledge of its spatial extent and motion, can provide very useful insights into the behaviour of a depletion event and the ionosphere in general. This knowledge and the detection mechanism can help to improve the quality and dependability of the information provided by SBAS systems, in particular the Indian GAGAN system, for improved navigation in this part of the world. The present thesis aims to make a significant contribution in this direction.
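The multi-threshold detection idea (a dip must exceed a depth threshold and fall inside a duration window) can be sketched on a synthetic TEC series. The baseline estimate and all threshold values below are illustrative placeholders, not GAGAN's actual parameters.

```python
import numpy as np

def detect_depletions(tec, t, depth_min=3.0, dur_min=10 / 60, dur_max=3.0):
    """Toy multi-threshold depletion detector for a TEC time series.

    Flags intervals where TEC dips below a quiescent baseline by at
    least `depth_min` (TEC-equivalent delay units) for a duration
    between `dur_min` and `dur_max` hours.  Dips still open at the end
    of the series are ignored in this sketch.
    """
    baseline = np.percentile(tec, 75)        # crude quiescent level
    below = (baseline - tec) >= depth_min    # depth threshold
    events, start = [], None
    for k, flag in enumerate(below):
        if flag and start is None:
            start = k                        # dip begins
        elif not flag and start is not None:
            dur = t[k - 1] - t[start]
            if dur_min <= dur <= dur_max:    # duration thresholds
                events.append((t[start], t[k - 1]))
            start = None
    return events

# Synthetic example: a flat 20-unit TEC curve with a 5-unit dip
# lasting half an hour, starting one hour into the record.
t = np.arange(0, 6, 0.05)                    # hours
tec = np.full_like(t, 20.0)
tec[(t > 0.99) & (t < 1.51)] -= 5.0
events = detect_depletions(tec, t)
```

A real detector would also threshold the rate of change of TEC and of its slope, as the thesis describes; those checks are omitted here for brevity.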
167

Methods for data and user efficient annotation for multi-label topic classification / Effektiva annoteringsmetoder för klassificering med multipla klasser

Miszkurka, Agnieszka January 2022 (has links)
Machine Learning models trained using supervised learning can achieve great results when a sufficient amount of labeled data is used. However, the annotation process is a costly and time-consuming task. There are many methods devised to make the annotation pipeline more user- and data-efficient. This thesis explores techniques from the Active Learning, Zero-shot Learning and Data Augmentation domains, as well as pre-annotation with revision, in the context of multi-label classification. Active Learning's goal is to choose the most informative samples for labeling. Contrastive Active Learning, a state-of-the-art Active Learning technique, was adapted to the multi-label case. Once there is some labeled data, we can augment samples to make the dataset more diverse. English-German-English backtranslation was used to perform Data Augmentation. Zero-shot Learning is a setup in which a Machine Learning model can make predictions for classes it was not trained to predict. Zero-shot via Textual Entailment was leveraged in this study and its usefulness for pre-annotation with revision was reported. The results on the Reviews of Electric Vehicle Charging Stations dataset show that it may be beneficial to use Active Learning and Data Augmentation in the annotation pipeline. Active Learning methods such as Contrastive Active Learning can identify samples belonging to the rarest classes, while Data Augmentation via backtranslation can improve performance, especially when little training data is available. The results of the Zero-shot Learning via Textual Entailment experiments show that this technique is not suitable for the production environment. / Klassificeringsmodeller som tränas med övervakad inlärning kan uppnå goda resultat när en tillräcklig mängd annoterad data används. Annoteringsprocessen är dock en kostsam och tidskrävande uppgift. Det finns många metoder utarbetade för att göra annoteringspipelinen mer användar- och dataeffektiv.
Detta examensarbete utforskar tekniker från områdena Active Learning, Zero-shot Learning, Data Augmentation, samt pre-annotering, där annoterarens roll är att verifiera eller revidera en klass föreslagen av systemet. Målet med Active Learning är att välja de mest informativa datapunkterna för annotering. Contrastive Active Learning utökades till fallet där en datapunkt kan tillhöra flera klasser. Om det redan finns några annoterade data kan vi utöka datamängden med artificiella datapunkter, med syfte att göra datasetet mer mångsidigt. Engelsk-Tysk-Engelsk översättning användes för att konstruera sådana artificiella datapunkter. Zero-shot-inlärning är en teknik i vilken en maskininlärningsmodell kan göra förutsägelser för klasser som den inte var tränad att förutsäga. Zero-shot via Textual Entailment utnyttjades i denna studie och dess användbarhet för förannotering med revidering utvärderades. Resultat från datamängden “Reviews of Electric Vehicle Charging Stations” visar att det kan vara fördelaktigt att använda Active Learning och Data Augmentation i annoteringspipelinen. Active Learning-metoder som Contrastive Active Learning kan identifiera datapunkter som tillhör de mest sällsynta klasserna, medan Data Augmentation via Backtranslation kan förbättra klassificerarens prestanda, särskilt när få träningsdata finns tillgänglig. Resultaten för Zero-shot Learning visar att denna teknik inte är lämplig för en produktionsmiljö.
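The selection rule behind Contrastive Active Learning, scoring each unlabeled sample by how much its predicted label distribution diverges from those of its nearest neighbours in feature space, can be sketched as follows. The toy features and probabilities are invented for illustration, and this single-label sketch is not the thesis's multi-label adaptation.

```python
import numpy as np

def cal_scores(feats, probs, k=2):
    """Contrastive Active Learning sketch: score each sample by the
    mean KL divergence between its predicted class distribution and
    those of its k nearest neighbours in feature space.  High scores
    mark 'contrastive' samples near decision boundaries, which are
    prioritised for annotation."""
    n = len(feats)
    d = np.linalg.norm(feats[:, None] - feats[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)              # exclude self-distance
    scores = np.empty(n)
    for i in range(n):
        nbrs = np.argsort(d[i])[:k]
        # KL(p_neighbour || p_i), averaged over the neighbourhood
        kl = np.sum(probs[nbrs] * np.log(probs[nbrs] / probs[i]), axis=1)
        scores[i] = kl.mean()
    return scores

# Toy data: two confident clusters and one ambiguous point in between.
feats = np.array([[0.0], [0.1], [1.0], [1.1], [0.55]])
probs = np.array([[0.9, 0.1], [0.9, 0.1],
                  [0.1, 0.9], [0.1, 0.9],
                  [0.5, 0.5]])
scores = cal_scores(feats, probs)
```

The ambiguous middle point, whose neighbours disagree with it most, receives the highest score and would be sent to the annotator first.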
168

Object detection for autonomous trash and litter collection / Objektdetektering för autonom skräpupplockning

Edström, Simon January 2022 (has links)
Trash and litter discarded on the street is a large environmental issue in Sweden and across the globe. In Swedish cities alone it is estimated that 1.8 billion articles of trash are thrown to the street each year, constituting around 3 kilotons of waste. One avenue to combat this societal and environmental problem is to use robotics and AI. A robot could learn to detect trash in the wild and collect it in order to clean the environment. A key component of such a robot would be its computer vision system, which allows it to detect litter and trash. Such systems are not trivially designed or implemented and have only recently reached high enough performance to work in industrial contexts. This master thesis focuses on creating and analysing such an algorithm by gathering data for use in a machine learning model, developing an object detection pipeline, and evaluating the performance of that pipeline when its components are varied. Specifically, methods using hyperparameter optimisation, pseudolabeling and the preprocessing methods tiling and illumination normalisation were implemented and analysed. This thesis shows that it is possible to create an object detection algorithm with high performance using currently available state-of-the-art methods. Within the analysed context, hyperparameter optimisation did not significantly improve performance, and pseudolabeling could only briefly be analysed but showed promising results. Tiling greatly increased mean average precision (mAP) for the detection of small objects, such as cigarette butts, but decreased the mAP for large objects, and illumination normalisation improved mAP for images that were brightly lit. Both preprocessing methods reduced the frames per second that a full detector could run at, whilst pseudolabeling and hyperparameter optimisation greatly increased training times. / Skräp som slängs på marken har en stor miljöpåverkan i Sverige och runtom i världen.
Enbart i svenska städer uppskattas det att 1,8 miljarder bitar skräp slängs på gatan varje år, bestående av cirka 3 kiloton avfall. Ett sätt att lösa detta samhälleliga och miljömässiga problem är att använda robotik och AI. En robot skulle kunna lära sig att detektera skräp i utomhusmiljöer och samla in det för att på så sätt rengöra våra städer och vår natur. En nyckelkomponent av en sådan robot skulle vara dess system för datorseende som tillåter den att se och hitta skräp. Sådana system är inte triviala att designa eller implementera och har bara nyligen påvisat tillräckligt hög prestanda för att kunna användas i kommersiella sammanhang. Detta masterexamensarbete fokuserar på att skapa och analysera en sådan algoritm genom att samla in data för att använda i en maskininlärningsmodell, utveckla en objektdetekteringspipeline och utvärdera prestandan när dess komponenter modifieras. Specifikt analyseras metoderna pseudomarkering, hyperparameteroptimering samt förprocesseringsmetoderna kakling och ljusintensitetsnormalisering. Examensarbetet visar att det är möjligt att skapa en objektdetekteringsalgoritm med hög prestanda med hjälp av den senaste tekniken på området. Inom det undersökta sammanhanget gav hyperparameteroptimering inte någon större förbättring av prestandan, och pseudomarkering kunde enbart ytligt analyseras men uppvisade preliminärt lovande resultat. Kakling förbättrade resultatet för detektering av små objekt, som cigarettfimpar, men minskade prestandan för större objekt, och ljusintensitetsnormalisering förbättrade prestandan för bilder som var starkt belysta. Båda förprocesseringsmetoderna minskade bildhastigheten som en detektor skulle kunna köra i, och pseudomarkering samt hyperparameteroptimering ökade träningstiden kraftigt.
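Tiling, the preprocessing step that most helped small-object detection here, amounts to slicing each image into overlapping crops and remembering their offsets so per-tile detections can be mapped back to full-image coordinates. A minimal sketch; the tile and overlap sizes are assumptions, not the thesis's settings:

```python
import numpy as np

def tile_image(img, tile=640, overlap=64):
    """Split an image into overlapping square tiles.

    Returns (offset, tile) pairs, where offset is the (x, y) of the
    tile's top-left corner in the full image.  Tiles near the right
    and bottom edges are shifted inwards so they keep the full tile
    size whenever the image is large enough.
    """
    h, w = img.shape[:2]
    step = tile - overlap
    tiles = []
    for y in range(0, max(h - overlap, 1), step):
        for x in range(0, max(w - overlap, 1), step):
            y0 = min(y, max(h - tile, 0))
            x0 = min(x, max(w - tile, 0))
            tiles.append(((x0, y0), img[y0:y0 + tile, x0:x0 + tile]))
    return tiles

# Example: a 1000x1280 frame yields a 3x2 grid of 640x640 tiles.
frame = np.zeros((1000, 1280, 3), dtype=np.uint8)
tiles = tile_image(frame)
```

The overlap ensures an object lying on a tile boundary appears whole in at least one crop; detections from the overlap region are then typically deduplicated with non-maximum suppression across tiles.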
169

Sur la génération d'exemples pour réduire le coût d'annotation

Piedboeuf, Frédéric 03 1900 (has links)
L'apprentissage machine moderne s'appuie souvent sur l'utilisation de jeux de données massifs, mais il existe de nombreux contextes où l'acquisition et la manipulation de grandes données n'est pas possible, et le développement de techniques d'apprentissage avec de petites données est donc essentiel. Dans cette thèse, nous étudions comment diminuer le nombre de données nécessaires à travers deux paradigmes d'apprentissage : l'augmentation de données et l'apprentissage par requête synthétisée. La thèse s'organise en quatre volets, chacun démontrant une nouvelle facette concernant la génération d'exemples pour réduire le coût d'annotation. Le premier volet regarde l'augmentation de données pour des textes en anglais, ce qui nous permet d'établir une comparaison objective des techniques et de développer de nouveaux algorithmes. Le deuxième volet regarde ensuite l'augmentation de données dans les langues autres que l'anglais, et le troisième pour la tâche de génération de mots-clés en français. Finalement, le dernier volet s'intéresse à l'apprentissage par requête synthétisée, où les exemples générés sont annotés, en contraste à l'augmentation de données qui produit des exemples sans coût d'annotation supplémentaire. Nous montrons que cette technique permet de meilleures performances, particulièrement lorsque le jeu de données est large et l'augmentation de données souvent inefficace. / Modern machine learning often relies on the use of massive datasets, but there are many contexts where acquiring and handling large data is not feasible, making the development of techniques for learning with small data essential. In this thesis, we investigate how to reduce the amount of data required through two learning paradigms: data augmentation and membership query synthesis. The thesis is organized into four parts, each demonstrating a new aspect of generating examples to reduce annotation costs.
The first part examines data augmentation for English text, allowing us to make an objective comparison of techniques and develop new algorithms. The second one then explores data augmentation in languages other than English, and the third focuses on the task of keyword generation in French. Finally, the last part delves into membership query synthesis, where generated examples are annotated, in contrast to data augmentation, which produces examples without additional annotation costs. We show that this technique leads to better performance, especially when the dataset is large and data augmentation is often ineffective.
170

Effect of a diffuser on the power production of an ocean current turbine

Reinecke, Josh 03 1900 (has links)
Thesis (MScEng (Mechanical and Mechatronic Engineering))--University of Stellenbosch, 2011. / Please refer to full text to view abstract.
