251

Relações tróficas e limnológicas no reservatório de Itaipu: uma análise do impacto da biomassa pesqueira nas comunidades planctônicas / Trophic and limnological interactions in the Itaipu reservoir: an analysis of the impact of the fishing biomass in the planktonic communities

Ribeiro Filho, Rinaldo Antonio 20 November 2006 (has links)
Numerous experimental studies have contributed to the development of lacustrine food-web theory, revealing the important role of fish, which were ignored in limnology for decades. Most of these studies were carried out in Europe and North America, and the generality of the theory has not yet been widely tested in subtropical and tropical environments. Although controversial, the trophic cascade and bottom-up : top-down hypotheses are the conceptual models most often used. In this context, this work is based on the hypothesis that cascading trophic interactions, related to bottom-up and top-down effects, can be detected in the Itaipu Reservoir, located in a subtropical region. The analysis uses data made available by Itaipu Binacional for the period 1999 to 2004. Analyses of covariance (ANCOVAs) were performed to determine the relationships of the dependent variables chlorophyll-a, cyanobacteria and water transparency with the other limnological variables, and correlation analyses were carried out to check for trophic cascade effects. The results characterize the reservoir as mesotrophic, with oligotrophic conditions in the lacustrine zone. Water transparency showed a negative relationship with total suspended solids and turbidity. Chlorophyll was positively related to total nitrogen, but not to total phosphorus. Omnivorous, detritivorous and insectivorous fishes exerted a negative (controlling) effect on cyanobacteria density and chlorophyll concentration. The estimated fish production of the reservoir was best related to its cyanobacteria, and a statistical model was derived from this relationship. Both top-down and bottom-up effects were confirmed: top-down forces were found only at the first trophic level, while the remaining levels showed bottom-up effects.
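For context, the analysis of covariance described in this abstract can be sketched in a few lines of Python. The data file, column names and the particular model below (chlorophyll-a against total nitrogen with reservoir zone as a factor) are illustrative assumptions, not the thesis's actual specification.

```python
# Hedged sketch: an ANCOVA of the kind described in the abstract, relating
# chlorophyll-a to total nitrogen while controlling for a categorical factor
# (here, reservoir zone). File and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("itaipu_limnology.csv")  # hypothetical data set, 1999-2004

# ANCOVA = ordinary least squares with one continuous covariate and one factor
model = smf.ols("chlorophyll_a ~ total_nitrogen + C(zone)", data=df).fit()
print(model.summary())

# Simple correlation screen, as used to look for trophic-cascade effects
print(df[["chlorophyll_a", "total_nitrogen", "total_phosphorus",
          "cyanobacteria", "transparency"]].corr())
```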
252

Measurement of the Production Cross-Section of Single Top Quarks in Association with W Bosons at ATLAS

Giorgi, Francesco Michelangelo 27 July 2017 (has links)
The work reported in this thesis is aimed at measuring the cross section of electroweak single top quark production in association with a W boson, a process also referred to as the Wt channel. Measuring this production mode tests the Standard Model prediction and, by comparison with the cross sections of the other single top production modes (the t- and s-channels), offers the possibility of identifying physics beyond the Standard Model. After a general introduction to top quark physics and a description of the ATLAS detector systems relevant for the detection and reconstruction of physics objects, the analysis of 4.7 fb^-1 of proton-proton collision data at a centre-of-mass energy of 7 TeV, recorded by the ATLAS detector at the Large Hadron Collider in 2011, is presented. Since the Wt-channel production rate at the LHC is considerably smaller than its main background, a chi-squared-based kinematic fit was developed to help identify signal events: the W boson and the top quark are reconstructed from the final-state particles and constrained simultaneously to the W boson and top quark masses. The chi-squared value of each event quantifies how well it matches the signal hypothesis and is used as a cut variable to implement a first tight event selection. The final selection step requires that the system formed by the reconstructed top quark and W boson be balanced in the transverse plane. The measurement is dominated by systematic uncertainties, which amount to nearly 100% of the measured cross-section value.
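As an illustration of the chi-squared-based kinematic fit described above (the exact definition is not given in the abstract, so the form below is a generic assumption), the fit can constrain the reconstructed candidates simultaneously to the W boson and top quark masses, e.g.

\[ \chi^2 = \frac{\left(m_{\ell\nu} - m_W\right)^2}{\sigma_W^2} + \frac{\left(m_{\ell\nu b} - m_t\right)^2}{\sigma_t^2}, \]

where $m_{\ell\nu}$ and $m_{\ell\nu b}$ are the invariant masses of the reconstructed W boson and top quark candidates and $\sigma_W$, $\sigma_t$ their resolutions; events with small $\chi^2$ are more compatible with the signal hypothesis and pass the tighter selection.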
253

Préservation de la confidentialité des données externalisées dans le traitement des requêtes top-k / Privacy preserving top-k query processing over outsourced data

Mahboubi, Sakina 21 November 2018 (has links)
Outsourcing corporate or individual data to a cloud provider, e.g. with the Database-as-a-Service approach, is practical and cost-effective, but it introduces a major problem: how to preserve the privacy of the outsourced data while still supporting expressive user queries. A simple solution is to encrypt the data before it is outsourced; to answer a query, the user client then retrieves the encrypted data from the cloud, decrypts it, and evaluates the query over plaintext (unencrypted) data. This solution is not practical, since it does not take advantage of the computing power provided by the cloud for evaluating queries. In this thesis, we consider an important kind of query, top-k queries, and address the problem of privacy-preserving top-k query processing over encrypted data in the cloud. A top-k query allows the user to specify a number k, and the system returns the k tuples most relevant to the query; the relevance of a tuple is determined by a scoring function. We first propose a complete system, called BuckTop, that can efficiently evaluate top-k queries over encrypted data without having to decrypt it in the cloud. BuckTop includes a top-k query processing algorithm that works on the encrypted data stored at a single cloud node and returns a set proved to contain the encrypted data corresponding to the top-k results. It also includes an efficient filtering algorithm, executed in the cloud on encrypted data, that removes most of the false positives from the returned set. When the outsourced data is large, it is typically partitioned over multiple nodes of a distributed system. For this case, we propose two new systems, called SDB-TOPK and SD-TOPK, that evaluate top-k queries over encrypted distributed data without decrypting it at the nodes where it is stored. In addition, SDB-TOPK and SD-TOPK have a powerful filtering algorithm that removes false positives as far as possible at the nodes and returns a small set of encrypted data to be decrypted on the user side. We analyze the security of our systems and propose efficient strategies to enforce it. We validated our solutions by implementing BuckTop, SDB-TOPK and SD-TOPK and comparing them with baseline approaches on synthetic and real databases. The results show excellent response times compared to the baselines, as well as the effectiveness of our filtering algorithm, which eliminates almost all false positives. Furthermore, our systems achieve a significant reduction in the communication cost between the nodes of the distributed system when computing the query result.
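To make the notion of a top-k query with a scoring function concrete, a minimal Python sketch is given below. It illustrates only plain, unencrypted top-k evaluation, not BuckTop's encrypted processing, and the example relation and scoring function are hypothetical.

```python
# Hedged sketch: plaintext top-k evaluation with a user-supplied scoring function.
# BuckTop performs the analogous selection over encrypted data in the cloud;
# this example only illustrates the query semantics.
import heapq

def top_k(tuples, score, k):
    """Return the k tuples with the highest scores."""
    return heapq.nlargest(k, tuples, key=score)

# Hypothetical relation: restaurants scored by rating and proximity.
restaurants = [
    {"name": "A", "rating": 4.5, "distance_km": 2.0},
    {"name": "B", "rating": 3.9, "distance_km": 0.5},
    {"name": "C", "rating": 4.8, "distance_km": 5.0},
]

# Scoring function chosen by the user: higher rating and smaller distance are better.
score = lambda r: r["rating"] - 0.2 * r["distance_km"]
print(top_k(restaurants, score, k=2))
```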
254

Development of a data-driven algorithm to Determine the W+Jets Background in tt̄ events in ATLAS

Mehlhase, Sascha 30 August 2010 (has links)
The physics of the top quark is one of the key components of the physics programme of the ATLAS experiment at the Large Hadron Collider at CERN. In this thesis, general studies of the jet trigger performance for top quark events using fully simulated Monte Carlo samples are presented, and two data-driven techniques, to estimate the multi-jet trigger efficiency and the W+jets background in top-pair events, are introduced to the ATLAS experiment. In a tag-and-probe based method, using a simple and common event selection and a high-transverse-momentum lepton as the tag object, the possibility of estimating the multi-jet trigger efficiency from data in ATLAS is investigated; the method is shown to estimate the efficiency without introducing any significant bias through the tag selection. In the second data-driven analysis, a new method to estimate the W+jets background in a top-pair event selection is introduced to ATLAS. By defining signal- and background-dominated regions by means of the jet multiplicity and the pseudorapidity distribution of the lepton in the event, the W+jets contribution is extrapolated from the background-dominated into the signal-dominated region. The method is found to estimate this background contribution as a function of the jet multiplicity with an accuracy of about 25% over most of the top-dominated region, for an integrated luminosity of 100 pb^-1 at sqrt(s) = 10 TeV. This thesis also covers a study of the thermal behaviour and expected thermal performance of the ATLAS Pixel Detector. All measurements performed during the commissioning phase of 2008/09 yield results within the specifications of the system, and the performance is expected to remain within them even after several years of running under LHC conditions.
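A generic form of the extrapolation described above, given here only as an illustration (the thesis's exact prescription is not stated in the abstract), is a transfer-factor estimate per jet-multiplicity bin,

\[ N^{W}_{\mathrm{SR}}(n_{\mathrm{jet}}) = N^{W}_{\mathrm{CR}}(n_{\mathrm{jet}}) \times \frac{N^{W,\mathrm{MC}}_{\mathrm{SR}}(n_{\mathrm{jet}})}{N^{W,\mathrm{MC}}_{\mathrm{CR}}(n_{\mathrm{jet}})}, \]

where the control region (CR) is the background-dominated lepton-pseudorapidity range, the signal region (SR) the top-dominated one, and the ratio (the transfer factor) carries the W+jets yield measured where it dominates into the region where the top-pair signal is selected.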
255

Extraction of the top quark mass from the total top quark pair production cross section in the single lepton channel

Ferrara, Valentina 10 April 2013 (has links)
A measurement of the total $t\bar{t}$ production cross section in the single-lepton channel is presented. The cross section is extracted in a profile likelihood fit of templates constructed from a likelihood classifier using four kinematic variables. For a top quark mass of $m_t = 172.5$ GeV, the measured cross section is $178.9 \pm 12$ pb, which agrees within one standard deviation with the latest theoretical predictions. The measurement is repeated for seven other values of the top quark mass, ranging from 140 GeV to 200 GeV, to obtain the mass dependence of the experimental cross section. By comparing this dependence with the mass dependence of different higher-order predictions, the top quark mass is extracted. This method allows the determination of two different theoretical mass parameters: the top quark mass in the on-shell scheme, $m_t^{\mathrm{pole}}$, and in the $\overline{\mathrm{MS}}$ scheme, $\overline{m}_t(\overline{m}_t)$. The most precise result, $m_t^{\mathrm{pole}} = 171.2 \pm 4.5$ GeV, is obtained when employing the most precise higher-order calculations in the $\overline{\mathrm{MS}}$ scheme. This value agrees within one standard deviation with the latest Tevatron average of top quark mass measurements.
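One common way to formalise the mass extraction sketched above (an illustration; the abstract does not spell out the exact prescription) is a joint likelihood in the assumed top quark mass,

\[ \mathcal{L}(m_t) = \int f_{\mathrm{exp}}\!\left(\sigma \mid m_t\right)\, f_{\mathrm{th}}\!\left(\sigma \mid m_t\right)\, \mathrm{d}\sigma, \]

where $f_{\mathrm{exp}}$ describes the measured cross section and its uncertainty as a function of the assumed mass and $f_{\mathrm{th}}$ the higher-order prediction with its scale and PDF uncertainties; maximising $\mathcal{L}(m_t)$ yields the mass in whichever scheme (pole or $\overline{\mathrm{MS}}$) the prediction is expressed.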
256

Tamanhos pictóricos percebidos sobre gradientes de textura desenhados e fotografados após exposições breves / Pictorial size perceived under line-drawing and photographed texture gradient after brief exposures

Bernardino, Leonardo Gomes 11 September 2012 (has links)
This study investigated the temporal dynamics of the perceived size of objects embedded in texture gradients, either a computer-generated line drawing (perspective gradient) or a photograph with natural elements (photographed gradient), and examined the occurrence and pattern of eye movements under these conditions. Two experiments were carried out. In the first, two black circles were displayed on the vertical meridian and 96 participants reported whether the larger one appeared in the upper or the lower visual field. Using the double-staircase psychophysical method, the circles were presented briefly (50, 100, 150 or 200 ms) against three backgrounds (no texture, perspective gradient or photographed gradient). The slope of the psychometric function and the point of subjective equality (PSE) were computed as measures of discrimination sensitivity and perceptual size distortion, respectively. Participants discriminated stimulus size better as exposure time increased, regardless of the texture gradient. The PSE analysis revealed a strong size distortion at 100 ms for the perspective gradient and at 150 ms for the photographed gradient. In the second experiment, 24 participants performed a similar task while an eye tracker recorded their eye movements. At 150 and 200 ms, eye movements occurred in fewer than 10% of trials for all backgrounds, indicating that the reduction of size distortions after 100 ms observed in the first experiment cannot be fully explained by the allocation of attention. Taken together, these results suggest that the processing and integration of size and depth information are mediated mainly by bottom-up mechanisms. Moreover, under a less restrictive temporal condition (1500 ms), the presence of depth information affected the spatial, but not the temporal, measures of fixations and saccades. Participants looked preferentially at the stimulus presented in the upper visual field, indicating that attention was allocated to positions of greater depth. The results also indicated that larger stimuli captured both transient and sustained attention. The study thus contributes to the understanding of the perceptual and attentional mechanisms involved in processing size and depth information.
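For readers unfamiliar with the two psychophysical measures named above, a common parameterisation (assumed here for illustration, not necessarily the thesis's exact model) is a logistic psychometric function,

\[ P(\text{``upper larger''} \mid \Delta s) = \frac{1}{1 + e^{-(\Delta s - \alpha)/\beta}}, \]

where $\Delta s$ is the physical size difference between the two circles, $\alpha$ is the point of subjective equality (the difference at which the two stimuli appear equal, $P = 0.5$), and $\beta$ is inversely related to the slope, i.e. to discrimination sensitivity; a PSE shifted away from zero quantifies the size distortion induced by the texture gradient.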
257

Traitement de requêtes top-k multicritères et application à la recherche par le contenu dans les bases de données multimédia / Multicriteria top-k query processing and application to content-based search in multimedia databases

Badr, Mehdi 07 October 2013 (has links)
Efficient processing of ranking queries is an important issue in today's information retrieval applications, such as meta-search engines on the web, information retrieval in social networks, and similarity search in multimedia databases. We address the problem of top-k multi-criteria query processing, where queries are composed of a set of ranking predicates, each expressing a measure of similarity between data objects on some specific criterion. Unlike traditional Boolean predicates, which return true or false, similarity predicates return a relevance score in a given interval. The query also specifies an aggregation function that combines the scores produced by the similarity predicates into a global score for each object; results are ranked by global score and only the best k are returned. In this thesis, we first study the state-of-the-art techniques and algorithms designed for top-k multi-criteria query processing under specific assumptions about the type and cost of access to the scores, and propose a generic framework able to express all of these algorithms. We then propose a new breadth-first strategy that maintains the current best k objects as a whole, instead of focusing only on the best candidate as the usual depth-first strategies do. We present Breadth-Refine (BR), a new top-k algorithm based on this strategy that can adapt to any combination of source access types and cost settings; experiments clearly indicate that BR successfully adapts to various settings, with better results than state-of-the-art algorithms. Secondly, we propose an adaptation of top-k algorithms to approximate search, aiming at a compromise between execution time and result quality. We explore approximation by early stopping of the execution and present a first experimental study of the approximation potential of top-k algorithms. Finally, we focus on the application of multi-criteria top-k techniques to large-scale content-based image retrieval. In this context, a multimedia object (e.g. an image) is represented by one or several descriptors, usually numeric vectors that can be seen as points in a multidimensional space. We explore k-nearest-neighbour (k-NN) search in such spaces and propose the Multi-criteria Search Algorithm (MSA), a new approximate k-NN technique based on multi-criteria top-k principles. We compare MSA with state-of-the-art methods in the context of large multimedia databases, where both the data and the index structures are stored on disk, and show that it quickly produces very good approximate results.
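As a concrete example of the sorted-access, state-of-the-art family of algorithms that such work surveys before introducing Breadth-Refine, a compact and simplified sketch of Fagin-style threshold processing in Python is given below; the source lists and the summation aggregation function are illustrative assumptions, not the thesis's algorithm.

```python
# Hedged sketch: a simplified threshold-style top-k algorithm (Fagin's TA).
# Each "source" ranks the same objects by one criterion; the aggregation
# function here is a plain sum of the per-criterion scores.
def threshold_top_k(sources, k):
    # sources: lists of (object_id, score) pairs, each sorted by descending score
    seen = {}                      # object_id -> aggregated score
    best = []                      # current candidate results
    for depth in range(max(len(s) for s in sources)):
        threshold = 0.0
        for src in sources:
            if depth >= len(src):
                continue
            obj, score = src[depth]            # sorted access
            threshold += score                 # aggregate the last-seen scores
            if obj not in seen:
                # random access: look the object up in every source
                seen[obj] = sum(dict(s).get(obj, 0.0) for s in sources)
        best = sorted(seen.items(), key=lambda kv: kv[1], reverse=True)[:k]
        # stop once the k-th best aggregated score reaches the threshold
        if len(best) == k and best[-1][1] >= threshold:
            break
    return best

sources = [
    [("a", 0.9), ("b", 0.8), ("c", 0.1)],   # e.g. color similarity
    [("b", 0.7), ("c", 0.6), ("a", 0.2)],   # e.g. texture similarity
]
print(threshold_top_k(sources, k=2))
```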
258

Integração das estratégias de sustentabilidade: "top-down" e "bottom-up" como ferramentas de aprendizagem para a alfabetização ecológica no Ensino Médio / Integration of sustainability strategies: "top-down" and "bottom-up" as learning tools for ecological literacy in high school

Silva, Tainá Gouvêa Galvão 03 August 2018 (has links)
Sustainable development emerged at the end of the 20th century to express concern about serious problems that put life on the planet at risk. Using the planet sustainably will require different sustainability strategies: those that address the highest level of the ecosystem, referred to as top-down, and those that address local or regional components, referred to as bottom-up. The school conveys this understanding through ecological concepts and phenomena; approaching them with interdisciplinary support, however, allows students to understand ecological phenomena more critically. For motivating learning, the teacher can use, in addition to formal educational spaces, non-formal spaces of education, that is, field lessons in natural environments. In order to promote the ecological literacy of high school students in the teaching of ecology, this work developed a didactic proposal with interdisciplinary support (Biology, Portuguese Language and Geography), with emphasis on top-down and bottom-up strategies, addressed in both formal and non-formal educational spaces. The teaching method in the formal educational space, although didactically structured to cover ecology, did not prove motivating to the students, and many ecological concepts and phenomena were not assimilated by them. In contrast, interdisciplinary integration combined with top-down and bottom-up strategies in non-formal educational spaces was motivating and improved the students' assimilation of biological concepts and phenomena, promoting their ecological literacy. It is hoped that this work has contributed to the teaching and learning of sustainability by promoting the ecological literacy of Biology students in public high schools, helping to form citizens who are more aware of and critical about environmental issues.
259

Projeto de um modulador sigma-delta de baixo consumo para sinais de áudio / Low power audio sigma delta modulator design

Alarcón Cubas, Heiner Grover 23 May 2013 (has links)
This work describes the design of a 16-bit (98 dB SNR) low-power Sigma-Delta analog-to-digital modulator in a CMOS technology for the acquisition of audio signals. The modulator was designed following a top-down methodology, from the system level down to the transistor-level building blocks. The system was analyzed and designed using equations and behavioral models to obtain the specifications of each block of the modulator. To achieve low power consumption, a third-order, four-bit CIFF (Chain of Integrators with Feed-Forward) topology implemented with switched capacitors was chosen. The modulator is composed of three switched-capacitor integrators, an analog adder, a weighted DAC and a four-bit quantizer. A chopper technique is included to reduce the flicker noise at the input of the modulator. The blocks with the highest power consumption are the OTAs, which were therefore designed using the gm/ID methodology to reduce power. The design was carried out in the IBM 0.18 µm technology using the Cadence Spectre simulator. The Sigma-Delta modulator achieves an SNR of 98 dB over a 20 kHz bandwidth with a power consumption of 2.4 mW from a 1.8 V supply.
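The correspondence between the 16-bit resolution and the 98 dB SNR quoted above follows from the standard relation between the signal-to-noise ratio of an ideal N-bit converter and a full-scale sine-wave input:

\[ \mathrm{SNR} \approx 6.02\,N + 1.76\ \mathrm{dB} = 6.02 \times 16 + 1.76 \approx 98.1\ \mathrm{dB}. \]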
260

Sistemas computacionais para atenção visual Top-Down e Bottom-up usando redes neurais artificiais / Computational systems for top-down and bottom-up visual attention using artificial neural networks

Alcides Xavier Benicasa 18 November 2013 (has links)
Perceiving a complex scene is quite a demanding task for a computer, although our brain does it efficiently. Evolution has developed ways to optimize our visual system so that only important parts of the scene undergo scrutiny at a given time; this selection mechanism is called visual attention. Visual attention operates in two modes: bottom-up and top-down. Bottom-up attention is driven by scene-based conspicuities, such as contrasts of color, orientation, etc., whereas top-down attention is controlled by task, memory, etc. Top-down attention can even modulate the bottom-up mechanism by biasing features according to the task. Besides the modulation mechanism, what is selected from the scene is also an important part of the selection process. In this respect, several theories have been proposed, which can be gathered into two main lines: space-based attention and object-based attention. Object-based models, instead of directing attention only to locations or specific features of the scene, claim that selection is performed at the object level, meaning that objects are the basic units of perception. To develop models following object-based theories, one needs to integrate a perceptual organization module, which segments objects from the background of the scene based on grouping principles such as similarity, proximity, etc.; these objects then compete for attention. Several object-based models of visual attention have been proposed in recent years. Research on models of visual attention has mainly focused on bottom-up guidance by early visual features, disregarding any information about objects. On the other hand, recent work has addressed the use of knowledge about the target to influence the selection of the most salient region; research in this area is relatively new and the few existing models are in their early phases. Here, we propose a new visual attention model with both bottom-up and top-down modulation, and provide qualitative and quantitative comparisons of the proposed model against human fixation maps and state-of-the-art methods.
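As a rough illustration of scene-based (bottom-up) conspicuity of the kind discussed above, the short Python sketch below computes an Itti-Koch-style center-surround contrast map; it is a generic illustration with arbitrary scale choices, not the model proposed in the thesis.

```python
# Hedged sketch: a crude bottom-up saliency map from center-surround
# contrast of image intensity (Itti-Koch style), for illustration only.
import numpy as np
from scipy.ndimage import gaussian_filter

def saliency_map(image):
    """image: 2-D float array of intensities in [0, 1]."""
    saliency = np.zeros_like(image)
    # Center-surround differences at a few scale pairs.
    for center_sigma, surround_sigma in [(1, 4), (2, 8), (4, 16)]:
        center = gaussian_filter(image, center_sigma)
        surround = gaussian_filter(image, surround_sigma)
        saliency += np.abs(center - surround)
    # Normalize to [0, 1]; the most salient locations would attract attention first.
    return saliency / (saliency.max() + 1e-12)

img = np.random.rand(128, 128)   # placeholder for a real grayscale image
print(saliency_map(img).shape)
```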
