281

Processos de composição microtonal por meio do modelo de dissonância sensorial / Microtonal compositional processes via the sensory dissonance model

Porres, Alexandre Torres, 14 December 2007
Advisor: Jonatas Manzolli / Master's dissertation (mestrado), Universidade Estadual de Campinas, Instituto de Artes / Abstract: In the present research, tuning systems were studied in order to investigate their application in creative processes. To that end, we drew a distinction between compositional approaches in the music of the 20th century and also studied the mechanisms of scale construction, relying mainly on psychoacoustics (i.e. a sensory dissonance, or roughness, model), which enabled the development of a computational tool both for the analysis of sounds and tuning systems and for sound creation. These studies served as the basis for creative processes, presented with commentary in light of the research objectives. / Master's degree in Music
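For readers unfamiliar with sensory dissonance (roughness) models, the sketch below computes a Sethares-style roughness curve for a harmonic tone played against a transposed copy of itself. It is a generic illustration using Sethares' published constants, not the computational tool developed in the dissertation, and the partial amplitudes are arbitrary.

```python
import numpy as np

def pair_roughness(f1, f2, a1, a2):
    """Roughness of two sine partials (Plomp-Levelt curve, Sethares parametrization)."""
    d_star, s1, s2, b1, b2 = 0.24, 0.021, 19.0, 3.5, 5.75
    s = d_star / (s1 * min(f1, f2) + s2)      # curve widens in lower registers
    x = s * abs(f2 - f1)
    return min(a1, a2) * (np.exp(-b1 * x) - np.exp(-b2 * x))

def total_roughness(freqs, amps):
    """Sum pairwise roughness over every pair of partials in a spectrum."""
    return sum(pair_roughness(freqs[i], freqs[j], amps[i], amps[j])
               for i in range(len(freqs)) for j in range(i + 1, len(freqs)))

# Dissonance curve of a 6-partial harmonic tone against itself, transposed by r in [1, 2].
base = 220.0
partials = base * np.arange(1, 7)
amps = 0.88 ** np.arange(6)                   # arbitrary spectral roll-off
ratios = np.linspace(1.0, 2.0, 500)
curve = [total_roughness(np.concatenate([partials, r * partials]),
                         np.concatenate([amps, amps])) for r in ratios]
# Local minima of `curve` fall near simple frequency ratios (e.g. 3/2, 5/4),
# the usual starting point for deriving tuning systems from a given spectrum.
```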
282

[en] AUTONOMIC INDEX CREATION IN DATABASES / [pt] CRIAÇÃO AUTÔNOMA DE ÍNDICES EM BANCOS DE DADOS

MARCOS ANTONIO VAZ SALLES, 20 December 2004
The choice and materialization of indexes are activities commonly carried out by database administrators (DBAs) to speed up database application processing. Due to the complexity of the index selection task and the pressure for productivity placed on tuning professionals, many works in the literature and in commercial systems seek tools that can help the DBA choose the best indexes for a given workload. We classify these works as local self-tuning, since they focus on a specific tuning problem, as opposed to global self-tuning work, which aims at acceptable performance for the system as a whole. This dissertation proposes two architectures that allow the complete automation of the index tuning task. Independence from human intervention is achieved through the use of software agents. The combination of agents and DBMSs makes systems more autonomous and self-tuning. We have implemented one of the proposed architectures in the open-source DBMS PostgreSQL and obtained experimental results with a transactional workload that show the feasibility of our approach.
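As a rough, self-contained illustration of the kind of decision an index-tuning agent automates (not the architectures proposed in the dissertation), the sketch below counts predicate columns in a toy query workload and emits candidate CREATE INDEX statements once a column is used often enough; the table names, threshold and regular expressions are all invented for the example.

```python
import re
from collections import Counter

# Toy workload; a real agent would capture these from the DBMS query log.
workload = [
    "SELECT * FROM orders WHERE customer_id = 42",
    "SELECT * FROM orders WHERE customer_id = 7 AND status = 'open'",
    "SELECT * FROM orders WHERE placed_at > '2004-01-01'",
    "SELECT * FROM customers WHERE customer_id = 42",
]

from_where = re.compile(r"FROM\s+(\w+)\s+WHERE\s+(.+)", re.IGNORECASE)
predicate_column = re.compile(r"(\w+)\s*[=<>]")

usage = Counter()
for query in workload:
    match = from_where.search(query)
    if match:
        table, where_clause = match.groups()
        for column in predicate_column.findall(where_clause):
            usage[(table, column)] += 1

THRESHOLD = 2  # arbitrary: suggest an index once a predicate column repeats
for (table, column), count in usage.items():
    if count >= THRESHOLD:
        print(f"CREATE INDEX idx_{table}_{column} ON {table} ({column});")
```

A real agent would of course also weigh index maintenance and storage costs, and would consult the optimizer rather than rely on simple predicate counts.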
283

Automatic fine tuning of cavity filters / Automatisk finjustering av kavitetsfilter

Boyer de la Giroday, Anna, January 2016
Cavity filters are a necessary component in base stations used for telecommunication. Without these filters it would not be possible for base stations to send and receive signals at the same time. Today these cavity filters require fine tuning by humans before they can be deployed. In this thesis, a neural network that can tune cavity filters has been designed and implemented. Different design parameters have been evaluated, such as neural network architecture, data presentation and data preprocessing. While the results were not comparable to human fine tuning, it was shown that there was a relationship between the error and the number of weights in the neural network. The thesis also presents some rules of thumb for future designs of neural networks used for filter tuning.
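As a sketch of the regression setup such a tuner implies (not the network or data pipeline from the thesis), the snippet below trains a small multilayer perceptron to map a measured filter response to suggested tuning-screw adjustments; the features and targets here are random placeholders standing in for measured S-parameters and screw offsets.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Placeholder data: X would hold sampled filter responses (e.g. S-parameters),
# y the corresponding offsets of each tuning screw measured on labelled filters.
n_samples, n_features, n_screws = 500, 40, 6
X = rng.normal(size=(n_samples, n_features))
y = rng.normal(size=(n_samples, n_screws))

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=300, random_state=0)
model.fit(X, y)

# For a freshly measured (detuned) filter, predict how far to turn each screw.
new_measurement = rng.normal(size=(1, n_features))
print(model.predict(new_measurement).shape)   # (1, 6)
```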
284

Testing and Tuning of Optimization Algorithms : On the implementation of Radiotherapy

Söderström, Ola, January 2015
When treating cancer patients using radiotherapy, careful planning is essential to ensure that the tumour region is treated while surrounding healthy tissue is not injured in the process. The radiation dose in the tumour, along with the dose limitations to healthy tissue, can be expressed as a constrained optimization problem. The goal of this project has been to create prototype environments in C++ for both testing and parameter tuning of optimization algorithms intended to solve radiotherapy problems. A library of test problems has been implemented on which the optimization algorithms can be tested. For the sake of simplicity, the problem solving and parameter tuning have only been carried out with the interior point solver IPOPT. The results of a parameter tuning process are displayed in tables where the effect of the tuning can be analysed. By using the implemented parameter tuning process, some settings have been found that are better than the default values when solving the implemented test problems.
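As a generic illustration of how such a planning problem is often posed (the thesis's actual test-problem library is not reproduced here), a fluence-map style formulation over non-negative beamlet weights $x$ with dose-influence matrices $D$ might read:

```latex
\begin{aligned}
\min_{x \ge 0}\quad & \bigl\lVert D_{\text{tumour}}\, x - d^{\text{presc}} \bigr\rVert_2^2 \\
\text{s.t.}\quad    & D_{\text{organ}}\, x \le d^{\max}_{\text{organ}}
                      \quad \text{for each healthy organ at risk.}
\end{aligned}
```

Constrained problems of this shape are what an interior point solver such as IPOPT is designed for, which is why its parameters (e.g. barrier update strategy, convergence tolerances, iteration limits) are natural tuning targets.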
285

Association Rules in Parameter Tuning : for Experimental Designs

Hållén, Henrik, January 2014
The objective of this thesis was to investigate the possibility of using association rule algorithms to automatically generate rules for the output of a parameter tuning framework. The rules would be the basis for a recommendation to the user regarding which parameter space to reduce during experimentation. The parameter tuning output was generated by means of an open source project (InPUT) example program. InPUT is a tool used to describe computer experiment configurations in a framework-independent input/output format. InPUT has adapters for the evolutionary algorithm framework Watchmaker and the tuning framework SPOT. The output was imported into R and preprocessed to a format suitable for association rule algorithms. Experiments were conducted on data for which the parameter spaces were discretized in 2, 5 and 10 steps. The minimum support threshold was set to 1% and 3% to investigate the number of rules over time. The Apriori and Eclat algorithms produced exactly the same number of rules, and the top 5 rules with regard to support were basically the same for both algorithms. It was not possible at the time to automatically distinguish useful rules. In combination with the many manual decisions during the process of converting the tuning output to association rules, the conclusion was reached not to recommend association rules for enhancing the parameter tuning process.
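For context, and independently of the thesis's R-based pipeline, the support and confidence that Apriori and Eclat report for a rule X => Y can be computed directly from a transaction set; the toy "discretized parameter setting" transactions below are invented for the example.

```python
from itertools import combinations

# Toy transactions of discretized parameter settings; items are made up.
transactions = [
    {"mutation=low", "popsize=small", "fitness=good"},
    {"mutation=low", "popsize=large", "fitness=good"},
    {"mutation=high", "popsize=small", "fitness=poor"},
    {"mutation=low", "popsize=small", "fitness=good"},
]

def support(itemset):
    """Fraction of transactions containing every item in `itemset`."""
    itemset = set(itemset)
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent):
    """support(X union Y) / support(X) for the rule X => Y."""
    return support(set(antecedent) | set(consequent)) / support(antecedent)

# Enumerate 1-item => 1-item rules above a minimum support threshold.
items = sorted(set().union(*transactions))
min_support = 0.5
for x, y in combinations(items, 2):
    for ante, cons in ((x, y), (y, x)):
        s = support({ante, cons})
        if s >= min_support:
            print(f"{ante} => {cons}  support={s:.2f}  "
                  f"confidence={confidence({ante}, {cons}):.2f}")
```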
286

Výběr modelu na základě penalizované věrohodnosti / Variable selection based on penalized likelihood

Chlubnová, Tereza, January 2016
Selection of variables and estimation of regression coefficients in datasets with the number of variables exceeding the number of observations constitutes an often discussed topic in modern statistics. Today the maximum penalized likelihood method, with an appropriately selected function of the parameter as the penalty, is used for solving this problem. The penalty should evaluate the benefit of the variable and possibly mitigate or nullify the respective regression coefficient. The SCAD and LASSO penalty functions are popular for their ability to choose appropriate regressors and at the same time estimate the parameters in a model. This thesis presents an overview of up-to-date results on the characteristics of estimates obtained by using these two methods, for both a small number of regressors and multidimensional datasets, in a normal linear model. Because the amount of penalization, and therefore also the choice of the model, is heavily influenced by the tuning parameter, this thesis further discusses its selection. The behaviour of the LASSO and SCAD penalty functions for different values and possibilities for selection of the tuning parameter is tested with various numbers of regressors on simulated datasets.
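For reference, in the normal linear model the penalized criterion and the two penalties discussed here are commonly written as follows (standard Fan-Li notation, not the thesis's own):

```latex
\hat{\beta} = \arg\min_{\beta}\; \tfrac{1}{2}\,\lVert y - X\beta \rVert_2^2
            + \sum_{j=1}^{p} p_{\lambda}\!\left(|\beta_j|\right),
\qquad
p_{\lambda}^{\mathrm{LASSO}}(|\beta|) = \lambda\,|\beta|,
```

while the SCAD penalty is defined through its derivative, for $\beta > 0$ and $a > 2$ (typically $a = 3.7$):

```latex
p_{\lambda}'(\beta) = \lambda \left\{ \mathbf{1}(\beta \le \lambda)
  + \frac{(a\lambda - \beta)_{+}}{(a-1)\lambda}\, \mathbf{1}(\beta > \lambda) \right\}.
```

The tuning parameter $\lambda$ controls how aggressively coefficients are shrunk to zero, which is why its selection drives the choice of the model.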
287

Investigating Pattern Recognition And Bi-coordinate Sound Localization in the Tree Cricket Species Oecanthus Henryi

Bhattacharya, Monisha, January 2016
Acoustic communication, used by a wide variety of animals, consists of the signaler, the signal and the receiver. A change in the behaviour of the receiver after reception of the signal is a prerequisite for communication. A response to the signal by the receiver depends on recognition of the signal and localization of the signal source. These two aspects, namely recognition and localization by the receiver, form the main body of my work. In the mating system of crickets, the males produce advertisement calls to attract silent females to mate. Females need to recognize the conspecific call and localize the male. The tree cricket Oecanthus henryi, due to aspects of its physiology and the environment it inhabits, generates interesting problems concerning these seemingly simple tasks of recognition and localization. In crickets there is usually a species-specific sender-receiver match for the call features, which aids in recognition. A change in the call carrier frequency with temperature, due to poikilothermy, as seen in O. henryi, may pose a problem for this sender-receiver match. To circumvent this, either the response should shift concomitantly with the change in the feature (narrow tuning) or the response should encompass the entire variation of the feature (broad tuning). I explored the response of O. henryi females to the changing nature of call carrier frequency with temperature. The results showed that O. henryi females are broadly tuned to call carrier frequency. Given this broad tuning, I next wanted to explore whether, within the natural variation in carrier frequency, the females were able to discriminate between frequencies. Females were found not to discriminate between frequencies. Cricket ears, being pressure-difference receivers, are inherently directional; however, their directionality depends on frequency, which may be affected by the change in carrier frequency due to temperature. Thus I also tested the effect of frequency on azimuthal localization accuracy. The azimuthal accuracy was not affected by call carrier frequency within the natural range of frequency variability of the species. In south India, O. henryi is found in sympatry with Oecanthus indicus. Reproductive isolation between the two is maintained through calls. Since O. henryi is broadly tuned to frequency, call carrier frequency is unlikely to enable differentiation between conspecific and heterospecific calls. I thus tested whether the temporal features could account for this. I constructed a quantitative multivariate model of the response space of O. henryi, incorporating results from various playback experiments. The model predicted high responses for conspecific calls and low responses for heterospecific calls, indicating that temporal features could suffice to discriminate between the two species. The quantitative model could also be used more generally to check responses to other heterospecifics and to compare responses between conspecifics from different populations. O. henryi is found on bushes, and thus the female has to navigate a 3D environment to localize the singing male. Very few studies have explored 3D localization in insects, and an algorithm explaining the procedure is missing. I attempted to model the 3D localization capability of O. henryi. To understand the rules behind the localization, animals were observed in the wild as well as on a 3D grid in the laboratory, and simulations were created to capture the nature of the phonotaxis.
Neither a random model nor a deterministic model (which estimated the shortest path) could predict the paths observed in the grid. A less complex Bayesian stochastic model performed better than a more complex one. From the assumptions of the model it was inferred that, for 3D localization, the animal essentially performs localization in the azimuthal plane and combines it with certain simple rules to go up or down. This study has examined receiver tuning in response to the change in carrier frequency with temperature, which to my knowledge had not been explored before for insects. In this study I also attempted to create a quantitative multivariate receiver response space through statistical modeling, a method that can be applied in similar studies across taxa in various acoustic communication systems. A detailed Bayesian algorithm to explain 3D localization in an insect was also developed, something that had not been attempted before.
288

TUNING OPTIMIZATION SOFTWARE PARAMETERS FOR MIXED INTEGER PROGRAMMING PROBLEMS

Sorrell, Toni P, 01 January 2017
The tuning of optimization software is of key interest to researchers solving mixed integer programming (MIP) problems. The efficiency of the optimization software can be greatly impacted by the solver's parameter settings and the structure of the MIP. A designed experiment approach is used to fit a statistical model that suggests parameter settings providing the largest reduction in the primal integral metric. Using tuning exemplars of six and 59 factors (parameters) of the optimization software, experimentation takes place on three classes of MIPs: survivable fixed telecommunication network design, a formulation of the support vector machine with the ramp loss and L1-norm regularization, and node packing for coding theory graphs. This research presents and demonstrates a framework for tuning a portfolio of MIP instances, not only to obtain good parameter settings to use for future instances of the same class of MIPs, but also to gain insights into which parameters and interactions of parameters are significant for that class of MIPs. The framework is used for benchmarking solvers with tuned parameters on a portfolio of instances. A group screening method provides a way to reduce the number of factors in a design and reduces the time it takes to perform the tuning process. Portfolio benchmarking provides performance information for optimization solvers on a class with instances of a similar structure.
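For reference, the primal integral used as the tuning objective here is commonly defined (following Berthold's formulation, up to edge cases for zero or opposite-sign objective values; notation mine) from the primal gap of the incumbent solution over time:

```latex
\gamma(t) =
\begin{cases}
1, & \text{if no incumbent has been found by time } t,\\[4pt]
\dfrac{\bigl|\, c_{\mathrm{opt}} - \tilde{c}(t) \,\bigr|}{\max\bigl\{ |c_{\mathrm{opt}}|,\ |\tilde{c}(t)| \bigr\}}, & \text{otherwise},
\end{cases}
\qquad
P(T) = \int_{0}^{T} \gamma(t)\, \mathrm{d}t,
```

where $\tilde{c}(t)$ is the objective value of the incumbent at time $t$ and $c_{\mathrm{opt}}$ the optimal (or best known) value. Smaller $P(T)$ means good solutions were found earlier, so parameter settings are compared by how much they reduce it.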
289

Estimating the Local False Discovery Rate via a Bootstrap Solution to the Reference Class Problem: Application to Genetic Association Data

Abbas Aghababazadeh, Farnoosh, January 2015
Modern scientific technologies such as microarrays, imaging devices, genome-wide association studies or social science surveys provide statisticians with hundreds or even thousands of tests to consider simultaneously. Testing many thousands of null hypotheses may increase the number of Type I errors. In large-scale hypothesis testing, researchers can use different statistical techniques such as family-wise error rates, false discovery rates, permutation methods, or the local false discovery rate, where all available data usually should be analyzed together. In applications, the thousands of tests are related by a scientifically meaningful structure. Ignoring that structure can be misleading, as it may increase the number of false positives and false negatives. As an example, in genome-wide association studies each test corresponds to a specific genetic marker. In such a case, the scientific structure for each genetic marker can be its minor allele frequency. In this research, the local false discovery rate is considered as a relevant statistical approach to analyze the thousands of tests together. We present a model for multiple hypothesis testing in which the scientific structure of each test is incorporated as a co-variate. The purpose of this model is to use the co-variate to improve the performance of testing procedures. The method we consider yields different estimates depending on a tuning parameter. We would like to estimate the optimal value of that parameter from the observed statistics. Thus, among those estimators, the one which minimizes the estimated errors due to bias and to variance is chosen by applying the bootstrap approach. Such an estimation method is called an adaptive reference class method. Under the combined reference class method, the effect of the co-variates is ignored and all null hypotheses are analyzed together. In this research, under some assumptions on the co-variates and the prior probabilities, the proposed adaptive reference class method shows smaller error than the combined reference class method in estimating the local false discovery rate when the number of tests gets large. We apply the adaptive reference class method to the coronary artery disease data, and we use simulation data to evaluate the performance of the estimator associated with the adaptive reference class method.
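For reference (standard two-group notation, not specific to this thesis), the local false discovery rate of a test statistic $z$ is defined from the mixture of null and non-null densities:

```latex
f(z) = \pi_0 f_0(z) + (1 - \pi_0)\, f_1(z),
\qquad
\mathrm{fdr}(z) = \Pr(\text{null} \mid Z = z) = \frac{\pi_0\, f_0(z)}{f(z)},
```

where $\pi_0$ is the prior probability that a case is null and $f_0$, $f_1$ are the null and non-null densities. Conditioning on a co-variate $x$ such as minor allele frequency replaces these by $\pi_0(x)$, $f_0(z \mid x)$ and $f_1(z \mid x)$, which is the kind of structure the reference class methods above exploit.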
290

Nouveaux algorithmes numériques pour l'utilisation efficace des architectures de calcul multi-coeurs et hétérogènes / New algorithms for Efficient use of multi-cores and heterogeneous architectures

Boillod-Cerneux, France, 13 October 2014
From the birth of supercomputers to the arrival of petaflopic machines, the technologies surrounding them have evolved at a staggering pace. This race for performance now runs up against the transition to exascale, which stands apart from previous scales through the difficulties it imposes: its consequences disrupt every scientific field related to High Performance Computing (HPC). We work in the context of eigenvalue problems, which are widespread, from page ranking to nuclear simulations, astronomy and oil exploration. Our approach comprises two complementary threads. First, we propose to study and then improve the convergence of the Explicitly Restarted Arnoldi Method (ERAM) by reusing the information it generates; studying and characterizing this convergence is essential in order to put Smart-Tuning techniques in place. The improvement phase consists in using the Ritz values efficiently so as to accelerate the convergence of the method without additional cost in terms of parallel communication or memory storage, both critical constraints for multi-core and heterogeneous machines. Finally, we propose two methods for generating matrices of very large dimensions with prescribed spectra, in order to build a collection of test matrices to be shared with the HPC community. These matrices will serve both to numerically validate eigenvalue solvers and to evaluate their parallel performance, thanks to properties suited to petaflopic machines and beyond.
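For context (standard Krylov-subspace notation rather than the thesis's specific restarting strategy), $m$ steps of the Arnoldi process applied to $A$ with unit starting vector $v_1$ yield:

```latex
A V_m = V_m H_m + h_{m+1,m}\, v_{m+1} e_m^{T},
```

where the columns of $V_m = [v_1, \dots, v_m]$ form an orthonormal basis of the Krylov subspace $\mathcal{K}_m(A, v_1)$ and $H_m$ is an $m \times m$ upper Hessenberg matrix. The eigenvalues of $H_m$ (the Ritz values) approximate eigenvalues of $A$, and an explicit restart rebuilds the subspace from a new starting vector assembled from selected Ritz information, which is the quantity ERAM reuses to accelerate convergence.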
