341

An analysis of salary parity in the artistic work of the actor from the perspective of the phenomenology of legal facts

Nakamura, Miliana Sanchez 24 June 2010
Salary parity is a legal rule derived from principles of equality that aim to curb discrimination in the workplace, achieve social justice, and secure equal treatment of employees. In this context, the salary parity rule extends, at first, to all employees without distinction. The present work analyzes the universe of the actor's artistic work, together with the facts related to it, and then applies the theory of the phenomenology of legal facts to verify, within the rigor of the law, whether the factual support of the rule is present when the work of one actor is compared with that of another. Applying the theory of the phenomenology of legal facts to salary parity in the acting profession is intended to serve as an auxiliary analytical instrument for legal practitioners dealing with the professional category of actors, thereby reducing the legal uncertainty generated by the lack of consensus on the matter.
342

Efficient architectures for error control using low-density parity-check codes

Haley, David January 2004
Recent designs for low-density parity-check (LDPC) codes have exhibited capacity approaching performance for large block length, overtaking the performance of turbo codes. While theoretically impressive, LDPC codes present some challenges for practical implementation. In general, LDPC codes have higher encoding complexity than turbo codes both in terms of computational latency and architecture size. Decoder circuits for LDPC codes have a high routing complexity and thus demand large amounts of circuit area. There has been recent interest in developing analog circuit architectures suitable for decoding. These circuits offer a fast, low-power alternative to the digital approach. Analog decoders also have the potential to be significantly smaller than digital decoders. In this thesis we present a novel and efficient approach to LDPC encoder / decoder (codec) design. We propose a new algorithm which allows the parallel decoder architecture to be reused for iterative encoding. We present a new class of LDPC codes which are iteratively encodable, exhibit good empirical performance, and provide a flexible choice of code length and rate. Combining the analog decoding approach with this new encoding technique, we design a novel time-multiplexed LDPC codec, which switches between analog decode and digital encode modes. In order to achieve this behaviour from a single circuit we have developed mode-switching gates. These logic gates are able to switch between analog (soft) and digital (hard) computation, and represent a fundamental circuit design contribution. Mode-switching gates may also be applied to built-in self-test circuits for analog decoders. Only a small overhead in circuit area is required to transform the analog decoder into a full codec. The encode operation can be performed two orders of magnitude faster than the decode operation, making the circuit suitable for full-duplex applications. Throughput of the codec scales linearly with block size, for both encode and decode operations. The low power and small area requirements of the circuit make it an attractive option for small portable devices.
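The parity-check structure that drives both the encoding and decoding complexity discussed above can be illustrated with a toy example. The Python sketch below uses a small dense matrix and hard-decision bit-flipping purely for illustration; practical LDPC codes use large sparse matrices, and the thesis's codec uses analog message-passing decoding, not this algorithm.

```python
import numpy as np

# Toy (7,4) parity-check matrix -- small and dense for readability, unlike
# the large sparse matrices that give LDPC codes their name and performance.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def syndrome(H, x):
    """Parity checks over GF(2); all zeros means x is a codeword."""
    return H @ x % 2

def bit_flip_decode(H, y, max_iters=20):
    """Hard-decision bit flipping, a simple ancestor of message passing."""
    x = y.copy()
    for _ in range(max_iters):
        s = syndrome(H, x)
        if not s.any():
            return x  # every parity check is satisfied
        # For each bit, count the failed checks it participates in,
        # then flip the bit(s) involved in the most failures.
        failures = s @ H
        x[failures == failures.max()] ^= 1
    return x

# The codeword [1,0,1,1,0,1,0] with bit 3 flipped in transit:
y = np.array([1, 0, 1, 0, 0, 1, 0])
print(bit_flip_decode(H, y))  # recovers [1 0 1 1 0 1 0]
```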
343

Large Scale Content Delivery applied to Files and Videos

Neumann, Christoph 14 December 2005
Reliable multicast is arguably the most efficient solution for distributing content to a very large number (potentially millions) of receivers. With this in mind, the ALC and FLUTE protocols, standardized by the IETF (RMT working group), have been adopted in 3GPP/MBMS and in DVB-H IP-Datacast in the context of 3G cellular networks.

This work focuses on reliable multicast, with massive scalability in the number of clients as its primary requirement. The thesis builds on the solutions proposed by the IETF RMT WG. These reliable multicast protocols are constructed from several basic building blocks, which we studied in detail:

* The Forward Error Correction (FEC) building block: we examine the class of large-block Low Density Parity Check (LDPC) codes, design derivatives of these codes, and analyze them in detail. We conclude that LDPC codes and their implementations show very promising performance, especially when used with large files.

* The congestion control building block: we examine the start-up behaviour of three congestion control protocols (RLC, FLID-SL, WEBRC) and show that the start-up phase has a large impact on download performance.

The thesis also makes several contributions at the application level:

* FLUTE extensions: we propose a mechanism for aggregating several files within the FLUTE protocol, which improves transmission performance.

* Video streaming: we propose SVSoA, a streaming solution based on ALC. This approach benefits from all the advantages of ALC in terms of scalability, congestion control, and error correction.

Keywords: reliable multicast, FLUTE, ALC, error-correcting codes, Forward Error Correction (FEC), Low Density Parity Check (LDPC) codes, content distribution
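The FEC building block's core idea can be shown with the simplest possible erasure code. The Python sketch below uses a single XOR parity packet protecting a block of source packets against one loss; it is a minimal illustration of the principle only, not the large-block LDPC codes the thesis studies, which use many sparse parity constraints instead of a single one.

```python
import os
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    """Bytewise XOR of two equal-length packets."""
    return bytes(x ^ y for x, y in zip(a, b))

k, size = 4, 8
source = [os.urandom(size) for _ in range(k)]  # k source packets
parity = reduce(xor, source)                   # one repair packet

# Packet 2 is lost in transit; XOR-ing the parity packet with the
# surviving source packets reconstructs it exactly.
survivors = [p for i, p in enumerate(source) if i != 2]
recovered = reduce(xor, survivors, parity)
assert recovered == source[2]
```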
344

Low-Density Parity-Check Decoding Algorithms

Pirou, Florent January 2004
Recently, low-density parity-check (LDPC) codes have attracted much attention because of their excellent error-correcting performance and highly parallelizable decoding scheme. However, an effective VLSI implementation of an LDPC decoder remains a big challenge and is a crucial issue in determining how well the benefits of LDPC codes can be exploited in real applications. In this master's thesis report, following a background on error coding, we describe low-density parity-check codes and their decoding algorithm, as well as the requirements and architectures of LDPC decoder implementations.
345

Efficient Message Passing Decoding Using Vector-based Messages

Grimnell, Mikael, Tjäder, Mats January 2005
The family of Low Density Parity Check (LDPC) codes is a strong candidate for Forward Error Correction (FEC) in future communication systems due to its strong error correction capability. Most LDPC decoders use the Message Passing algorithm for decoding, an iterative algorithm that passes messages between its variable nodes and check nodes. It is only recently that computation power has become sufficient to make Message Passing on LDPC codes feasible. Although locally simple, LDPC codes are usually large, which increases the required computation power. Earlier work on LDPC codes has concentrated on the binary Galois field, GF(2), but it has been shown that codes over higher-order fields have better error correction capability. However, the most efficient LDPC decoder, the Belief Propagation decoder, suffers a squared complexity increase when moving to higher-order Galois fields. Transmission with M-PSK signalling is a common technique to increase spectral efficiency; the information is transmitted as the phase angle of the signal.

The focus of this Master's thesis is on simplifying Message Passing decoding with inputs from M-PSK signals transmitted over an AWGN channel. Symbols from higher-order Galois fields were mapped to M-PSK signals, since M-PSK is very bandwidth-efficient and the information can be found in the angle of the signal. Several simplifications of Belief Propagation have been developed and tested. The most promising is the Table Vector Decoder, a Message Passing decoder that uses a table-lookup technique for check node operations and vector summation for variable node operations. The table lookup approximates the check node operation of a Belief Propagation decoder, while vector summation is an equivalent operation to the variable node operation. Monte Carlo simulations have shown that the Table Vector Decoder can achieve performance close to Belief Propagation. Its capability depends on the number of reconstruction points and their placement. The main advantage of the Table Vector Decoder is that its complexity is unaffected by the Galois field used; instead, there is a memory space requirement that depends on the desired number of reconstruction points.
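To make the variable-node/check-node exchange concrete, the Python sketch below runs min-sum decoding, a common simplification of Belief Propagation, over a toy binary code. It is illustrative only: the thesis's Table Vector Decoder replaces the check-node rule with a table lookup and operates on vector messages over higher-order Galois fields.

```python
import numpy as np

# Toy parity-check matrix; msg[i, j] is the message on the edge between
# check node i and variable node j.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def min_sum_decode(H, llr, max_iters=20):
    m, n = H.shape
    msg = H * llr  # variable-to-check messages, initialized to channel LLRs
    for _ in range(max_iters):
        # Check-node update: sign product and minimum magnitude of the
        # other incoming messages on each check.
        chk = np.zeros_like(msg, dtype=float)
        for i in range(m):
            idx = np.flatnonzero(H[i])
            for j in idx:
                others = [k for k in idx if k != j]
                chk[i, j] = (np.prod(np.sign(msg[i, others]))
                             * np.min(np.abs(msg[i, others])))
        # Variable-node update and tentative hard decision.
        total = llr + chk.sum(axis=0)
        hard = (total < 0).astype(int)
        if not (H @ hard % 2).any():
            return hard  # all parity checks satisfied
        msg = H * (total - chk)  # extrinsic messages back to the checks
    return hard

# BPSK over AWGN: positive LLR means "bit 0 more likely"; bit 3 is noisy.
llr = np.array([2.1, 1.9, 2.5, -0.4, 1.8, 2.2, 2.0])
print(min_sum_decode(H, llr))  # corrects bit 3, returns the zero codeword
```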
346

Graphical representations of Ising and Potts models: Stochastic geometry of the quantum Ising model and the space-time Potts model

Björnberg, Jakob Erik January 2009
Statistical physics seeks to explain macroscopic properties of matter in terms of microscopic interactions. Of particular interest is the phenomenon of phase transition: the sudden changes in macroscopic properties as external conditions are varied. Two models in particular are of great interest to mathematicians, namely the Ising model of a magnet and the percolation model of a porous solid. These models in turn are part of the unifying framework of the random-cluster representation, a model for random graphs which was first studied by Fortuin and Kasteleyn in the 1970s. The random-cluster representation has proved extremely useful in proving important facts about the Ising model and similar models.

In this work we study the corresponding graphical framework for two related models. The first model is the transverse-field quantum Ising model, an extension of the original Ising model which was introduced by Lieb, Schultz and Mattis in the 1960s. The second model is the space-time percolation process, which is closely related to the contact model for the spread of disease.

In Chapter 2 we define the appropriate space-time random-cluster model and explore a range of useful probabilistic techniques for studying it. The space-time Potts model emerges as a natural generalization of the quantum Ising model. The basic properties of the phase transitions in these models are treated in this chapter, such as the fact that there is at most one unbounded FK-cluster, and the resulting lower bound on the critical value in $\mathbb{Z}^d$.

In Chapter 3 we develop an alternative graphical representation of the quantum Ising model, called the random-parity representation. This representation is based on the random-current representation of the classical Ising model, and allows us to study the phase transition and critical behaviour in much greater detail. A major aim of this chapter is to prove sharpness of the phase transition in the quantum Ising model, a central issue in the theory, and to establish bounds on some critical exponents. We address these issues by using the random-parity representation to establish certain differential inequalities, integration of which gives the results.

In Chapter 4 we explore some consequences and possible extensions of the results established in Chapters 2 and 3. For example, we determine the critical point for the quantum Ising model in $\mathbb{Z}^d$ and in 'star-like' geometries.
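For reference, the classical random-cluster (Fortuin-Kasteleyn) measure underlying this framework can be written down in a few lines; this is the standard definition on a finite graph, not the space-time version developed in the thesis. For a graph $G=(V,E)$, edge parameter $p \in [0,1]$ and cluster weight $q > 0$, a bond configuration $\omega \in \{0,1\}^E$ receives probability

$$\phi_{p,q}(\omega) = \frac{1}{Z_{p,q}} \left( \prod_{e \in E} p^{\omega(e)} (1-p)^{1-\omega(e)} \right) q^{k(\omega)},$$

where $k(\omega)$ is the number of connected components of $\omega$ and $Z_{p,q}$ is the normalizing constant. Setting $q=1$ recovers ordinary percolation, while $q=2$ corresponds to the Ising model via the coupling mentioned in the abstract.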
347

The relationship between carry trade currencies and equity markets during the 2003-2012 time period

Dumitrescu, Andrei, Tuovila, Antti January 2013
One of the most popular investment and trading strategies over the last decade has been the currency carry trade, which allows traders and investors to buy high-yielding currencies in the foreign exchange (FX) spot market by borrowing low or zero interest rate currencies, in the form of pairs such as the Australian Dollar/Japanese Yen (AUD/JPY), with the purpose of afterwards investing the proceeds into fixed-income securities.

To determine the causality between the returns of equity markets and the foreign exchange market, we observe the sensitivity and influence of two equity indexes on several pairs involved in carry trading. The reason for studying these relationships is to further explain the causes of the uncovered interest parity puzzle, thus adding our contribution to the academic field through this thesis.

To accomplish our goals, data was gathered for daily quotes of 16 different currency pairs, grouped by interest differentials, and two equity indexes, the S&P 500 and FTSE All-World, along with data for the VIX volatility index, for the 2003-2012 period. The data was collected from Thomson Reuters Datastream, and the selected ten-year span was divided into three different periods. This was done in order to discover how equity indexes relate to typical carry trade currency pairs depending on market developments before, during and after the world financial crisis.

The tests conducted on the collected data measured the correlations, influences and sensitivity of the 16 different currency pairs with the S&P 500 index, the FTSE All-World index, and the volatility index between the years 2003-2012. For influences and sensitivity, we performed Maximum Likelihood (ML) regressions with Generalized Autoregressive Conditional Heteroscedasticity, GARCH(1,1), in the EViews software.

After analyzing the results, we found that, during our chosen time period, the majority of currency pair daily returns are positively correlated with the equity indexes, and that the FX pairs show greater correlation with the FTSE All-World than with the S&P 500. Factors such as the interest rate of a currency and the choice of funding currency played an important role in the foreign exchange markets, during the ten-year time span, for every yield group of FX pairs.

Regarding the influence and sensitivity between currency pairs and the S&P 500 with its VIX index, we found that our models' explanatory power seems to be stronger when the interest rate differential between the currency pairs is smaller. Our regression analysis also uncovered that the characteristics of an individual currency can have noticeable effects on the relationship between its pair and the two indexes.
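As a sketch of the correlation step only (the thesis's ML regressions with GARCH(1,1) errors are beyond a few lines), the Python snippet below computes full-sample and rolling correlations between a carry pair's daily log returns and an equity index; the series names and simulated data are illustrative placeholders, not the thesis's dataset.

```python
import numpy as np
import pandas as pd

# Simulated daily price series standing in for a carry pair and an index.
rng = np.random.default_rng(0)
dates = pd.bdate_range("2003-01-01", "2012-12-31")
prices = pd.DataFrame({
    "AUDJPY": 80 * np.exp(np.cumsum(rng.normal(0, 0.007, len(dates)))),
    "SP500": 900 * np.exp(np.cumsum(rng.normal(0, 0.011, len(dates)))),
}, index=dates)

returns = np.log(prices).diff().dropna()          # daily log returns
print(returns["AUDJPY"].corr(returns["SP500"]))   # full-sample correlation
rolling = returns["AUDJPY"].rolling(250).corr(returns["SP500"])  # ~1y window
print(rolling.dropna().tail())
```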
348

On Non-Binary Constellations for Channel Encoded Physical Layer Network Coding

Faraji-Dana, Zahra 18 April 2012
This thesis investigates channel-coded physical layer network coding, in which the relay directly transforms the noisy superimposed channel-coded packets received from the two end nodes into the network-coded combination of the source packets. This is in contrast to the traditional multiple-access problem, in which the goal is to obtain each message explicitly at the relay. Here, the end nodes $A$ and $B$ choose their symbols, $S_A$ and $S_B$, from a small non-binary field, $\mathbb{F}$, and use a non-binary PSK constellation mapper during the transmission phase. The relay then directly decodes the network-coded combination $aS_A+bS_B$ over $\mathbb{F}$ from the noisy superimposed channel-coded packets received from the two end nodes. Trying to obtain $S_A$ and $S_B$ explicitly at the relay is overly ambitious when the relay only needs $aS_A+bS_B$. For the binary case, the only possible network-coded combination, $S_A+S_B$ over the binary field, does not offer the best performance under several channel conditions. The advantage of working over non-binary fields is the opportunity to decode according to multiple decoding coefficients $(a,b)$. As only one of the network-coded combinations needs to be successfully decoded, a key advantage is a reduction in error probability obtained by attempting to decode against all choices of decoding coefficients. In this thesis, we compare different constellation mappers and prove that not all of them have distinct performance in terms of frame error rate. Moreover, we derive a lower bound on the frame error rate performance of decoding the network-coded combinations at the relay. Simulation results show that if we adopt concatenated Reed-Solomon and convolutional coding or low-density parity-check codes at the two end nodes, our non-binary constellations can outperform the binary case significantly in terms of minimizing the frame error rate; in particular, the ternary constellation has the best frame error rate performance among all considered cases.
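The extra freedom that a non-binary field gives the relay is easy to see in a few lines. The Python sketch below illustrates the combination arithmetic only, with made-up symbol vectors rather than the thesis's channel-coded decoder: it enumerates all valid coefficient pairs $(a,b)$ over GF(3), whereas over GF(2) the relay would have the single option $S_A+S_B$.

```python
import itertools

q = 3  # work over the prime field GF(3)

def network_code(sa, sb, a, b, q=q):
    """The symbol vector the relay targets: a*S_A + b*S_B over GF(q)."""
    return [(a * x + b * y) % q for x, y in zip(sa, sb)]

S_A = [1, 2, 0, 1]  # illustrative source symbols from node A
S_B = [2, 2, 1, 0]  # illustrative source symbols from node B

# All candidate coefficient pairs with a, b != 0; decoding succeeds if
# *any one* of these combinations can be recovered from the superposition.
for a, b in itertools.product(range(1, q), repeat=2):
    print((a, b), network_code(S_A, S_B, a, b))
```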
349

The Relationship Between Competitive Balance and Revenue in America's Two Largest Sports Leagues

Pautler, Matt D. 01 January 2010
This paper looks at the impact that competitive balance has on team revenues. The operating hypothesis is that higher levels of competitive balance lead to higher levels of revenue. Two different measures of competitive balance are used, and regressions are run to investigate whether high levels of the competitive balance measure are associated with high levels of revenue (a sketch of the two measures follows below). The results indicated that over all three time horizons (ten-year, five-year, and two-year), high variability in playoff appearances was associated with high revenue for Major League Baseball (MLB) teams. The results also indicate that over a two-year time span, a high standard deviation in winning percentage was associated with higher revenue in both MLB and the National Football League (NFL), and that a high standard deviation of winning percentage over a ten-year period was associated with lower revenues in the NFL. The data provides consistent support for the hypothesis of a positive relationship between competitive balance and revenue in MLB and inconsistent support in the NFL. This inconsistent relationship in the NFL is hypothesized to be due to differences in time horizons: over the short term, fans like to see high variability in winning percentage because it gives them faith that their team will be good the next season; in the long term, however, fans do not like a lot of variability in their team and would rather see a consistent winner.
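As a minimal Python illustration of the two balance measures named above, with entirely made-up numbers rather than the paper's data:

```python
import statistics

# One hypothetical team over a ten-year horizon.
win_pct = [0.562, 0.481, 0.543, 0.417, 0.605,
           0.494, 0.525, 0.469, 0.580, 0.512]
playoffs = [1, 0, 1, 0, 1, 0, 0, 0, 1, 1]  # 1 = made the playoffs that season

print(statistics.stdev(win_pct))   # measure 1: variability in winning percentage
print(statistics.stdev(playoffs))  # measure 2: variability in playoff appearances
```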
350

Development of high-efficiency silicon solar cells and modeling the impact of system parameters on levelized cost of electricity

Kang, Moon Hee 02 April 2013
The objective of this thesis is to develop low-cost high-efficiency crystalline silicon solar cells which are at the right intersection of cost and performance to make photovoltaics (PV) affordable. The goal was addressed by improving the optical and electrical performance of silicon solar cells through process optimization, device modeling, clever cell design, fundamental understanding, and minimization of loss mechanisms. To define the right intersection of cost and performance, analytical models to assess the premium or value associated with efficiency, temperature coefficient, balance of system cost, and solar insolation were developed and detailed cost analysis was performed to quantify the impact of key system and financial parameters in the levelized cost of electricity from PV.
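The levelized cost of electricity that the thesis models can be sketched in its standard form: discounted lifetime cost divided by discounted lifetime energy. The Python example below uses this textbook formula with illustrative inputs (system price, degradation rate, discount rate); it is not the thesis's detailed model, which additionally values efficiency, temperature coefficient, balance-of-system cost, and insolation.

```python
def lcoe(capex, annual_opex, annual_kwh, degradation, discount_rate, years):
    """Discounted lifetime costs divided by discounted lifetime energy."""
    costs = capex + sum(annual_opex / (1 + discount_rate) ** t
                        for t in range(1, years + 1))
    energy = sum(annual_kwh * (1 - degradation) ** (t - 1)
                 / (1 + discount_rate) ** t
                 for t in range(1, years + 1))
    return costs / energy  # $ per kWh

# Example: a 5 kW rooftop system at $2.50/W with 0.5%/yr module degradation.
print(lcoe(capex=12_500, annual_opex=100, annual_kwh=7_500,
           degradation=0.005, discount_rate=0.06, years=25))
```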
