About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations (NDLTD).

Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
11

Hedmans Kvadratrotsalgoritm / Hedman's square root algorithm

Hedman, Anders. January 2001
In this 10-credit thesis I show how my own square root algorithm works, in practice and in theory. With this algorithm one can compute square roots to 50-60 significant digits by hand; with the previously known square root algorithms one can compute 5-6 significant digits. My algorithm does not work in the same way as the square root algorithms used before, but it is just as correct. Much of the thesis is therefore devoted to showing that there are several different correct algorithms for our ordinary arithmetic operations. The thesis also contains a short account of the ongoing debate on whether algorithm-based computation in compulsory school inhibits pupils' mathematical thinking or not.
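The abstract compares against the previously known hand methods without reproducing either algorithm. For context, here is a minimal Python sketch of the classical pencil-and-paper digit-by-digit square root (the textbook procedure, not Hedman's method):

```python
def digit_by_digit_sqrt(n: int, frac_digits: int) -> str:
    """Classical pencil-and-paper square root: group the radicand into
    pairs of digits and extract one decimal digit of the root per pair,
    using the identity (10r + d)^2 = 100r^2 + (20r + d)d."""
    s = str(n)
    if len(s) % 2:                 # pad to whole two-digit groups
        s = "0" + s
    pairs = [int(s[i:i + 2]) for i in range(0, len(s), 2)]
    pairs += [0] * frac_digits     # zero pairs for the fractional part

    root, remainder = 0, 0
    for pair in pairs:
        remainder = remainder * 100 + pair
        d = 9                      # largest d with (20*root + d)*d <= remainder
        while (20 * root + d) * d > remainder:
            d -= 1
        remainder -= (20 * root + d) * d
        root = root * 10 + d

    digits = str(root).rjust(len(pairs), "0")
    split = len(pairs) - frac_digits
    return digits[:split] + "." + digits[split:]

print(digit_by_digit_sqrt(2, 10))  # 1.4142135623
```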
13

A questão da legitimidade no Parlasul: uma abordagem da representação cidadã utilizando o índice de Banzhaf e a Penrose Square Root Law / The question of legitimacy in Parlasur: an approach to citizen representation using the Banzhaf index and the Penrose square root law

Hipólito Abílio Ramos, Mariana. 31 January 2012
The main objective of this study was to assess whether the vote-weighting system, the so-called "citizen representation", to be adopted by the Mercosur Parliament is legitimate in the sense of giving the bloc's member states their due representation. To this end, legitimacy was first discussed through authors such as Weber, Kelsen, Bobbio and Habermas. Taking the Habermasian concept as the most appropriate, the research topic was then approached in the light of Comparative Politics, with the European Parliament as the comparative benchmark, and of Game Theory through the Banzhaf index, which seeks to measure each player's power precisely as its capacity to influence decisions. The study also sought to determine whether, once the vote-weighting system is in place, the citizens of the different Mercosur countries will have the same influence over the decisions taken. It therefore considered Lionel Penrose's proposal, the Square Root Law, which states that each citizen of a country will have the same influence over an election's outcome if the country's power is approximately proportional to the square root of its number of citizens. The Banzhaf index was then computed with the algorithm of Życzkowski & Słomczyński (2004) for six scenarios: i) a one-country-one-vote system; ii) the first stage of citizen representation; iii) the second stage of citizen representation; iv) the second stage of citizen representation under a hypothetical accession of Venezuela to Mercosur; v) the system proposed by Penrose; vi) the system proposed by Penrose under Venezuela's accession. Finally, the results were analyzed. Among the main findings, the vote-weighting system is more equitable with respect to the number of inhabitants represented per parliamentarian, and, in general, citizens of the more populous countries are disadvantaged in all six scenarios with regard to the power to influence a decision. Citizen representation in its two stages, despite the different weights for Brazil and Argentina, does not modify the coalition structure, so that in both stages each member state has the same probability of influencing decisions. The same does not hold when Venezuela's accession is considered, which reduces the Banzhaf index of every country except Brazil, which benefits greatly and gains more weight in the parliament's decisions. The vote-weighting system was found to make the process more legitimate with respect to the representation of Mercosur's members, even if far from ideal. The problem stems above all from the bloc's internal population disparity.
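Two quantitative tools anchor the analysis: the Banzhaf index, which measures a player's power as its share of the coalitions in which its vote is decisive, and Penrose's square root law, under which a country's voting weight should be roughly proportional to the square root of its population. The thesis computes the index with the Życzkowski & Słomczyński (2004) algorithm; the brute-force Python sketch below only illustrates the definition, with invented weights and quota rather than the actual Parlasur seat allocation:

```python
from itertools import combinations

def banzhaf(weights: list[int], quota: int) -> list[float]:
    """Normalized Banzhaf index by coalition enumeration: player i is a
    'swing' in a winning coalition if removing i makes the coalition lose."""
    n = len(weights)
    swings = [0] * n
    for r in range(1, n + 1):
        for coalition in combinations(range(n), r):
            total = sum(weights[i] for i in coalition)
            if total < quota:
                continue
            for i in coalition:
                if total - weights[i] < quota:   # i is critical here
                    swings[i] += 1
    all_swings = sum(swings)
    return [c / all_swings for c in swings]

# Invented weights; under Penrose's law they would be chosen roughly
# proportional to the square roots of the member populations.
print(banzhaf([4, 4, 1, 1], quota=6))   # [1/3, 1/3, 1/6, 1/6]
```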
15

Pipelining of Double Precision Floating Point Division and Square Root Operations on Field-Programmable Gate Arrays

Thakkar, Anuja. 01 January 2006
Many space applications, such as vision-based systems, synthetic aperture radar, and radar altimetry, rely increasingly on high data rate DSP algorithms. These algorithms use double precision floating point arithmetic operations. While most DSP applications can be executed on DSP processors, the numerical requirements of these new space applications surpass by far the numerical capabilities of many current DSP processors. Since the tradition in DSP processing has been to use fixed point number representation, only recently have DSP processors begun to incorporate floating point arithmetic units, and even then most of these units handle only single precision floating point addition/subtraction, multiplication, and occasionally division. While DSP processors are slowly evolving to meet the numerical requirements of newer space applications, FPGA densities have rapidly increased to parallel and even surpass the gate densities of many DSP processors and commodity CPUs. This makes them attractive platforms on which to implement compute-intensive DSP computations. Even given this clear advantage on the side of FPGAs, few attempts have been made to examine how wide precision floating point arithmetic, particularly division and square root operations, can perform on FPGAs in support of these compute-intensive DSP applications. In this context, this thesis presents sequential and pipelined designs of IEEE-754 compliant double precision floating point division and square root operations based on low radix digit recurrence algorithms. FPGA implementations of these algorithms have the advantage of being easily testable. In particular, the pipelined designs are synthesized based on careful partial and full unrolling of the iterations in the digit recurrence algorithms. Overall, the sequential and pipelined designs are common-denominator implementations that do not use any performance-enhancing embedded components such as multipliers and block memory. As these implementations exploit exclusively the fine-grain reconfigurable resources of Virtex FPGAs, they are easily portable to other FPGAs with similar reconfigurable fabrics without any major modifications. The pipelined designs of these two operations are evaluated in terms of area, throughput, and dynamic power consumption as a function of pipeline depth. Pipelining experiments reveal that the area overhead tends to remain constant regardless of the degree of pipelining to which the design is submitted, while the throughput increases with pipeline depth. In addition, these experiments reveal that pipelining reduces power considerably in shallow pipelines; pipelining these designs further does not necessarily lead to significant power reduction. By partitioning these designs into deeper pipelines, they can reach throughputs close to the 100 MFLOPS mark while consuming a modest 1% to 8% of the reconfigurable fabric within a Virtex-II XC2VX000 (e.g., XC2V1000 or XC2V6000) FPGA.
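The thesis's designs are written in VHDL for Virtex FPGAs; as a language-neutral illustration of the low radix digit recurrence idea, here is a Python model of the radix-2 restoring square-root recurrence, in which each iteration produces one result bit from shifts, adds, and comparisons, and therefore maps naturally onto one pipeline stage when unrolled:

```python
def isqrt_radix2(n: int) -> int:
    """Radix-2 restoring square-root digit recurrence: each iteration
    produces one result bit using only shifts, adds, and comparisons."""
    root = 0
    bit = 1 << ((n.bit_length() - 1) & ~1) if n else 0  # highest even bit
    while bit:
        if n >= root + bit:
            n -= root + bit
            root = (root >> 1) + bit
        else:
            root >>= 1
        bit >>= 2
    return root

# Fixed-point use (the scaling here is an illustrative choice, not the
# thesis's IEEE-754 datapath): 2*f extra radicand bits give a root with
# f fraction bits.
f = 24
print(isqrt_radix2(2 << (2 * f)) / (1 << f))  # ≈ 1.41421356
```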
16

Forecasting the term structure of volatility of crude oil price changes

Balaban, E., Lu, Shan. 2016
This is a pioneering effort to test the comparative performance of two competing models for out-of-sample forecasting of the term structure of volatility of crude oil price changes, employing both symmetric and asymmetric evaluation criteria. Under symmetric error statistics, our empirical model using the estimated growth factor of volatility through time is superior overall, and it beats the benchmark square-root-of-time model in most cases for holding periods between one and 250 days. Under asymmetric error statistics, if over-prediction (under-prediction) of volatility is undesirable, the empirical (benchmark) model is consistently superior. The relative performance of the empirical model is much higher for holding periods up to fifty days.
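The paper's benchmark is the standard square-root-of-time rule, which scales a one-period volatility to an h-period horizon as sigma_h = sigma_1 * sqrt(h). A minimal sketch with invented numbers, not the paper's crude oil data:

```python
import math

def scale_volatility(sigma_one: float, horizon: int) -> float:
    """Square-root-of-time benchmark: sigma_h = sigma_1 * sqrt(h),
    which is exact only under i.i.d. (uncorrelated) returns."""
    return sigma_one * math.sqrt(horizon)

# Illustrative: a 1% daily volatility scaled to the paper's longest
# holding period of 250 days.
print(scale_volatility(0.01, 250))  # ≈ 0.158, i.e. 15.8%
```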
17

Real Time 3D Surface Feature Extraction on FPGA

Tellioglu, Zafer Hasim. 01 July 2010
Three dimensional (3D) surface feature extraction based on mean (H) and Gaussian (K) curvature analysis of range maps, also known as depth maps, is an important tool for machine vision applications such as object detection, registration and recognition. Mean and Gaussian curvature calculation algorithms have already been implemented and examined in software. In this thesis, hardware-based digital curvature processors are designed. Two types of real time surface feature extraction and classification hardware are developed which perform mean and Gaussian curvature analysis at different scale levels, using different gradient approximations. A fast square root algorithm using both a LUT (look-up table) and a linear fitting technique is developed to calculate the H and K values of the surface described by the 3D range map, which is formed of fixed-point numbers. The proposed methods are simulated in MATLAB and implemented on different FPGAs using the VHDL hardware description language. Calculation times, outputs and power analysis of these techniques are compared to CPU-based 64-bit floating point calculations.
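A plausible software model of the LUT-plus-linear-fit square root the abstract mentions is sketched below; the table size, normalization range, and float arithmetic are assumptions for illustration, not the thesis's fixed-point parameters:

```python
import math

# 64-segment lookup table for sqrt(m) over m in [1, 4].
N = 64
STEP = 3.0 / N
LUT = [math.sqrt(1.0 + i * STEP) for i in range(N + 1)]

def lut_sqrt(x: float) -> float:
    """Write x = m * 4**e with m in [1, 4), so sqrt(x) = sqrt(m) * 2**e;
    approximate sqrt(m) by interpolating the two nearest LUT entries."""
    if x <= 0.0:
        return 0.0
    e = 0
    while x >= 4.0:     # factor out even powers of two
        x *= 0.25
        e += 1
    while x < 1.0:
        x *= 4.0
        e -= 1
    t = (x - 1.0) / STEP
    i = min(int(t), N - 1)
    frac = t - i
    return (LUT[i] * (1.0 - frac) + LUT[i + 1] * frac) * 2.0 ** e

print(lut_sqrt(2.0))    # ≈ 1.41421, error bounded by the segment width
```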
18

Ανάπτυξη δομών φίλτρων χαμηλής τάσης τροφοδοσίας στο πεδίο της τετραγωνικής ρίζας / Development of low supply voltage filter structures in the square-root domain

Στούμπου, Ελένη. 14 January 2009
The subject of this master thesis is the development of analog filters in the square-root domain using the Linear Transformation method. As a design example, the design, simulation, and finally the physical (layout) design of a third-order elliptic lowpass filter in the square-root domain are presented. For comparison, the filter is realized with four different passive-filter simulation methods (Leapfrog, Topologic, Wave, and Linear Transformation), and the analysis of each method is presented in its own chapter.
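Square-root-domain synthesis starts from a normalized transfer-function prototype. A hedged SciPy sketch of a third-order elliptic lowpass prototype like the design example (the 1 dB passband ripple and 40 dB stopband attenuation are assumed values, not the thesis's specification):

```python
import numpy as np
from scipy.signal import ellip, freqs

# Third-order analog elliptic lowpass prototype, unit cutoff frequency.
b, a = ellip(N=3, rp=1, rs=40, Wn=1.0, btype="low", analog=True)

# Inspect the magnitude response that the square-root-domain circuit
# is then synthesized to realize.
w, h = freqs(b, a, worN=np.logspace(-1, 1, 500))
print("gain at w = 0.5 rad/s:", abs(h[np.argmin(np.abs(w - 0.5))]))
```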
19

Laboratorní přípravek s analogovou výpočetní jednotkou AD538 / Laboratory device with analog computational unit AD538

Hruboš, Zdeněk. January 2009
Analog multipliers are circuits that realize the multiplication of two analog signals. They can be built from suitably connected discrete parts, but nowadays integrated circuits perform this function, with an internal structure built around operational amplifiers and supporting circuitry. These circuits reach a very high arithmetic accuracy, mostly better than 1%. Analog multipliers are used wherever multiplication, division, exponentiation, square root extraction, or logarithmic computation of analog signals is needed, and further in circuits for frequency multiplication, frequency shifting, amplitude modulation, detecting the phase angle between two signals of the same frequency, etc. The AD538 is a monolithic real-time computational circuit that provides precision analog multiplication, division, and exponentiation. The combination of low input and output offset voltages and excellent linearity results in accurate computation over an unusually wide input dynamic range.
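The AD538's ideal behavior reduces to the single transfer function Vout = Vy * (Vz / Vx)^m, with the exponent m set externally; a minimal idealized Python model (it ignores the offset and linearity errors that real parts keep below the roughly 1% level cited above):

```python
def ad538_ideal(v_y: float, v_x: float, v_z: float, m: float = 1.0) -> float:
    """Idealized log-antilog transfer function of the AD538:
    Vout = Vy * (Vz / Vx)**m. Real parts add small offset and
    linearity errors on top of this."""
    return v_y * (v_z / v_x) ** m

print(ad538_ideal(1.0, 1.0, 4.0, m=1.0))   # multiplication/division: 4.0
print(ad538_ideal(1.0, 1.0, 4.0, m=0.5))   # square root extraction: 2.0
```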
20

Efficient formulation and implementation of ensemble based methods in data assimilation

Nino Ruiz, Elias David. 11 January 2016
Ensemble-based methods have gained widespread popularity in the field of data assimilation. An ensemble of model realizations encapsulates information about the error correlations driven by the physics and the dynamics of the numerical model. This information can be used to obtain improved estimates of the state of non-linear dynamical systems such as the atmosphere and/or the ocean. This work develops efficient ensemble-based methods for data assimilation.

A major bottleneck in ensemble Kalman filter (EnKF) implementations is the solution of a linear system at each analysis step. To alleviate it, an EnKF implementation based on an iterative Sherman-Morrison formula is proposed. The rank deficiency of the ensemble covariance matrix is exploited in order to efficiently compute the analysis increments during the assimilation process. The computational effort of the proposed method is comparable to those of the best EnKF implementations found in the current literature. The stability of the new algorithm is theoretically proven based on the positiveness of the data error covariance matrix.

In order to improve the background error covariance matrices in ensemble-based data assimilation, we explore the use of shrinkage covariance matrix estimators from ensembles. The resulting filter has attractive features in terms of both memory usage and computational complexity. Numerical results show that it performs better than traditional EnKF formulations.

In geophysical applications the correlations between errors corresponding to distant model components decrease rapidly with the distance. We propose a new and efficient implementation of the EnKF based on a modified Cholesky decomposition for inverse covariance matrix estimation. This approach exploits the conditional independence of background errors between distant model components with regard to a predefined radius of influence. Consequently, sparse estimators of the inverse background error covariance matrix can be obtained, which implies huge memory savings during the assimilation process under realistic weather forecast scenarios. Rigorous error bounds for the resulting estimator in the context of data assimilation are theoretically proved: the estimator converges to the true inverse background error covariance matrix when the ensemble size is of the order of the logarithm of the number of model components.

We explore high-performance implementations of the proposed EnKF algorithms. When the observational operator can be locally approximated for different regions of the domain, efficient parallel implementations of the EnKF formulations presented in this dissertation can be obtained. The parallel computation of the analysis increments is performed making use of domain decomposition: local analysis increments are computed on (possibly) different processors and then mapped back onto the global domain to recover the global analysis. Tests performed with an atmospheric general circulation model at a T-63 resolution, varying the number of processors from 96 to 2,048, reveal that the assimilation time can be decreased multiple fold for all the proposed EnKF formulations.

Ensemble-based methods can also be used to reformulate strong-constraint four-dimensional variational data assimilation so as to avoid the construction of adjoint models, which can be complicated for operational models. We propose a trust region approach based on ensembles in which the analysis increments are computed in the space of an ensemble of snapshots. The quality of the resulting increments in the ensemble space is compared against the gains in the full space, and decisions on whether to accept or reject solutions rely on trust region updating formulas. Results based on an atmospheric general circulation model with a T-42 resolution reveal that this methodology can improve the analysis accuracy.
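For orientation, the sketch below shows a textbook stochastic (perturbed-observations) EnKF analysis step in Python on synthetic data. It is a baseline formulation, not the dissertation's Sherman-Morrison or modified-Cholesky implementations; the linear solve inside the gain computation is exactly the per-step bottleneck those methods attack:

```python
import numpy as np

def enkf_analysis(X: np.ndarray, y: np.ndarray, H: np.ndarray,
                  R: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """One stochastic EnKF analysis step with perturbed observations.

    X: n x m ensemble of model states (m members)
    y: observation vector; H: linear observation operator; R: obs error cov.
    """
    n, m = X.shape
    Xm = X.mean(axis=1, keepdims=True)
    A = X - Xm                                # ensemble anomalies
    Pb = A @ A.T / (m - 1)                    # rank-deficient background cov.
    # Kalman gain; this linear solve is the analysis-step bottleneck.
    K = Pb @ H.T @ np.linalg.solve(H @ Pb @ H.T + R, np.eye(len(y)))
    # Perturb observations so the analysis ensemble has correct spread.
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=m).T
    return X + K @ (Y - H @ X)

# Tiny synthetic test: 3 state variables, 2 observed, 20 members.
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 20))
H = np.array([[1., 0., 0.], [0., 1., 0.]])
Xa = enkf_analysis(X, y=np.array([0.5, -0.2]), H=H, R=0.1 * np.eye(2), rng=rng)
print(Xa.mean(axis=1))
```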
