131

Certain Diagonal Equations over Finite Fields

Sze, Christopher 29 May 2009 (has links)
Let F_{q^t} be the finite field with q^t elements and let F_{q^t}^* be its multiplicative group. We study the diagonal equation ax^{q-1} + by^{q-1} = c, where a, b, c ∈ F_{q^t}^*. This equation can be written as x^{q-1} + αy^{q-1} = β, where α, β ∈ F_{q^t}^*. Let N_t(α, β) denote the number of solutions (x, y) ∈ F_{q^t}^* × F_{q^t}^* of x^{q-1} + αy^{q-1} = β, and let I(r; a, b) be the number of monic irreducible polynomials f ∈ F_q[x] of degree r with f(0) = a and f(1) = b. We show that N_t(α, β) can be expressed in terms of I(r; a, b), where r | t and a, b ∈ F_q^* are related to α and β. A recursive formula for I(r; a, b) is given, and we illustrate it by computing I(r; a, b) for 2 ≤ r ≤ 4. We also show that N_3(α, β) can be expressed in terms of the number of monic irreducible cubic polynomials over F_q with prescribed trace and norm; consequently, N_3(α, β) can be expressed in terms of the number of rational points on a certain elliptic curve. We prove that for any a, b ∈ F_q^* and any integer r ≥ 3, there always exists a monic irreducible polynomial f ∈ F_q[x] of degree r such that f(0) = a and f(1) = b. We also use the result on N_2(α, β) to construct a new family of planar functions.
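As a concrete handle on the quantity I(r; a, b) defined above, the following brute-force sketch counts monic irreducible polynomials over a prime field F_p with prescribed values f(0) = a and f(1) = b. The function and toy parameters are our illustration, not the thesis's method (which uses a recursive formula).

```python
from itertools import product
from sympy import symbols, Poly

x = symbols('x')

def count_irreducibles(p, r, a, b):
    """Brute-force I(r; a, b): monic irreducible degree-r polynomials
    over F_p (p prime) with f(0) = a and f(1) = b."""
    count = 0
    for coeffs in product(range(p), repeat=r - 1):
        # f = x^r + c_{r-1} x^{r-1} + ... + c_1 x + a  (monic, f(0) = a)
        f = Poly([1, *coeffs, a], x, modulus=p)
        # f(1) is the sum of the coefficients; normalize mod p before comparing
        if f.eval(1) % p == b % p and f.is_irreducible:
            count += 1
    return count

print(count_irreducibles(2, 3, 1, 1))  # -> 2 (x^3+x+1 and x^3+x^2+1 over F_2)
print(count_irreducibles(3, 2, 1, 1))  # -> 0: for r = 2 the count can vanish,
# consistent with the existence theorem above holding only for r >= 3
```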
132

Basis Reduction Algorithms and Subset Sum Problems

LaMacchia, Brian A. 01 June 1991 (has links)
This thesis investigates a new approach to lattice basis reduction suggested by M. Seysen. Seysen's algorithm attempts to globally reduce a lattice basis, whereas the Lenstra, Lenstra, Lovász (LLL) family of reduction algorithms concentrates on local reductions. We show that Seysen's algorithm is well suited to reducing certain classes of lattice bases, and often requires much less time in practice than the LLL algorithm. We also demonstrate how Seysen's algorithm for basis reduction may be applied to subset sum problems. Seysen's technique, used in combination with the LLL algorithm and other heuristics, enables us to solve a much larger class of subset sum problems than was previously possible.
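To make the lattice-to-subset-sum connection concrete, here is a minimal sketch (not the thesis's code) of the standard basis construction used in LLL-style subset sum attacks: a solution e ∈ {0,1}^n shows up as a short vector in the lattice spanned by the rows below, and the reduction step itself would be delegated to an LLL or Seysen implementation.

```python
def subset_sum_lattice(weights, target, N=None):
    """Rows of a lattice basis for the subset sum instance (weights, target).
    If e in {0,1}^n solves sum(e_i * weights[i]) == target, then
    e_1*b_1 + ... + e_n*b_n + b_{n+1} = (e_1, ..., e_n, 0) is a short
    lattice vector; the factor N penalizes a nonzero last coordinate."""
    n = len(weights)
    N = N or n  # any weight comfortably larger than sqrt(n) will do
    basis = [[1 if j == i else 0 for j in range(n)] + [N * weights[i]]
             for i in range(n)]
    basis.append([0] * n + [-N * target])
    return basis

# Toy instance: the subset {15, 7} sums to the target 22.
weights, target = [15, 92, 7, 61], 22
for row in subset_sum_lattice(weights, target):
    print(row)
# A basis-reduction routine (LLL, or Seysen's algorithm) applied to these
# rows would expose the short solution vector (1, 0, 1, 0, 0).
```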
133

The structure of Langmuir monolayers probed with vibrational sum frequency spectroscopy

Gurau, Marc Cory 29 August 2005 (has links)
Langmuir monolayers can be employed as simple model systems to study interactions at surfaces. Such investigations are important to fields ranging from biology to materials science. Herein, several aspects of these films and their associated water structure have been examined with vibrational sum frequency spectroscopy (VSFS). This second-order nonlinear optical spectroscopy is particularly well suited to simultaneous investigations of the monolayer and the associated water structure, with unprecedented surface specificity. The structures of these systems were altered through the control of experimental parameters including monolayer pressure, subphase temperature, pH and ionic content. Thermodynamic information about structural changes in a fatty amine monolayer's hydrophobic region was obtained by observing the pressure and temperature dependence of the monolayer's solid-to-liquid phase transition. Further studies used the coordination of divalent cations to acid monolayers to perturb the water layers nearest the film, which enabled a better understanding of the water-related VSFS features of these hydrophilic interfaces. Information from both the monolayer and the water structure was then combined in order to examine the role of water in mediating ion-biomaterial interactions, often expressed in terms of the Hofmeister series.
134

Coopetition (Competition and Cooperation) Strategy

Lu, Chin-long 02 August 2007 (has links)
Nowadays the market environment is becoming more dynamic and turbulent; intensifying competition pushes many organizations into a red ocean. Under such circumstances, the priority is not merely survival but generating sufficient earnings to sustain the business. Many organizations therefore renew their strategies and plans in order to differentiate themselves and outperform within their industry. Drawing on practical experience and integrating theoretical findings, this study develops a framework for analyzing how an organization builds sustainable capacity through coopetition. A real case is used to illustrate how L Company, operating under intensely competitive conditions, created synergy by merging six parallel businesses into one large company. From this merger, the following findings emerge: 1. A small business can compete with a large company by cooperating with others. 2. Relationships are a critical element for integrating forces and building consensus among business partners. 3. In integration, effective results come from intensive compromise and negotiation among business partners. Keywords: Coopetition strategy, Red ocean, Blue ocean strategy, Zero-sum, Win-win.
135

Sum-rate maximization for active channels

Mirzaei, Javad 01 April 2013 (has links)
In conventional wireless channel models, there is no control over the gains of the different subchannels: the transmitted signal undergoes attenuation and phase shift and is subject to multipath propagation effects. We refer to such channels as passive channels. In this dissertation, we study the problem of joint power allocation and channel design for a parallel channel which conveys information from a source to a destination through multiple orthogonal subchannels. In such a link, power can be adjusted not only at the source but also over each subchannel; we refer to this link as an active parallel channel. For such a channel, we study the problem of sum-rate maximization under the assumption that the source power as well as the energy of the active channel are constrained. This problem is investigated for equal and unequal noise powers at the different subchannels. For equal noise power over the subchannels, although the sum-rate maximization problem is not convex, we propose a closed-form solution. An interesting aspect of this solution is that it requires only a subset of the subchannels to be active while the remaining subchannels are switched off. This is in contrast with passive parallel channels with equal subchannel signal-to-noise ratios (SNRs), where the water-filling solution to sum-rate maximization under a total source power constraint leads to an equal power allocation among all subchannels. Furthermore, we prove that the number of active subchannels depends on the product of the source and channel powers. We also prove that if the total power available to the source and to the channel is limited, then in order to maximize the sum-rate via optimal power allocation, half of the total available power should be allocated to the source and the remaining half to the active channel. We then extend our analysis to the case where the noise powers are unequal over the subchannels. We show that the sum-rate maximization problem is again not convex; nevertheless, with the aid of the Karush-Kuhn-Tucker (KKT) conditions, we propose a computationally efficient algorithm for optimal source and channel power allocation. To this end, we first obtain the feasible number of active subchannels; we then show that the optimal solution can be obtained by comparing a finite number of points in the feasible set and choosing the one which yields the best sum-rate performance. The worst-case computational complexity of this solution is linear in the number of subchannels. / UOIT
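For contrast with the active-channel results described above, here is a sketch of the classical water-filling allocation for a passive parallel Gaussian channel under a total source power constraint, the baseline the abstract compares against. Function names and the toy numbers are illustrative.

```python
import numpy as np

def water_filling(noise_powers, total_power):
    """Classic water-filling over parallel Gaussian subchannels:
    maximize sum_i log2(1 + p_i / n_i) s.t. sum_i p_i = P, p_i >= 0.
    Returns the optimal per-subchannel power allocation."""
    n = np.asarray(noise_powers, dtype=float)
    n_sorted = np.sort(n)
    # Try filling the k quietest subchannels; the water level is
    # mu = (P + sum of the k smallest noises) / k, valid if mu > n_sorted[k-1].
    k = len(n)
    while k > 0:
        mu = (total_power + n_sorted[:k].sum()) / k
        if mu > n_sorted[k - 1]:
            break
        k -= 1
    return np.maximum(mu - n, 0.0)

p = water_filling([1.0, 2.0, 4.0], total_power=3.0)
print(p, p.sum())  # -> [2. 1. 0.] 3.0: the noisiest subchannel gets nothing
```

Note that with equal noise powers this reduces to an equal split across all subchannels, which is exactly the behavior the active-channel solution above departs from.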
136

Identification of switched linear regression models using sum-of-norms regularization

Ohlsson, Henrik, Ljung, Lennart January 2013 (has links)
This paper proposes a general convex framework for the identification of switched linear systems. The proposed framework uses over-parameterization to avoid solving the otherwise combinatorially forbidding identification problem, and takes the form of a least-squares problem with a sum-of-norms regularization, a generalization of ℓ1-regularization. The regularization constant governs the complexity of the estimate and is used to trade off fit against the number of submodels. / Funding agencies: Swedish Foundation for Strategic Research (center MOVIII); Swedish Research Council (Linnaeus center CADICS); European Research Council (267381); Sweden-America Foundation; Swedish Science Foundation.
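A minimal sketch of the over-parameterized estimate described above, written with the convex-optimization front end cvxpy (our choice; the paper does not prescribe a solver): each sample receives its own parameter vector, and the sum-of-norms penalty on successive differences makes the estimated parameters piecewise constant, i.e. the model switches rarely.

```python
import numpy as np
import cvxpy as cp

def fit_switched(X, y, lam):
    """Over-parameterized least squares with sum-of-norms regularization:
    one parameter vector theta[t] per sample; penalizing the 2-norms of
    successive differences trades off fit against the number of switches."""
    T, d = X.shape
    theta = cp.Variable((T, d))
    residuals = y - cp.sum(cp.multiply(X, theta), axis=1)
    fit = cp.sum_squares(residuals)
    reg = cp.sum(cp.norm(theta[1:] - theta[:-1], 2, axis=1))
    cp.Problem(cp.Minimize(fit + lam * reg)).solve()
    return theta.value

# Synthetic data: the regression coefficient switches from 2.0 to -1.0.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 1))
true_theta = np.where(np.arange(100) < 50, 2.0, -1.0)
y = true_theta * X[:, 0] + 0.05 * rng.normal(size=100)
theta_hat = fit_switched(X, y, lam=1.0)
print(theta_hat[0], theta_hat[-1])  # near 2.0 and -1.0 for a suitable lam
```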
137

Generation, Characterization and Application of the 3rd and 4th Harmonics of a Ti:sapphire Femtosecond Laser

Wright, Peter 25 January 2012 (has links)
Femtosecond time-resolved photoelectron spectroscopy (fsTRPES) experiments have been used to study the photoelectron energy spectra of simple molecules since the 1980s. Analysis of these spectra provides information about the ultrafast internal conversion dynamics of the parent ions. However, ultraviolet pulses must be used for these pump-probe experiments in order to ionize the molecules. Since current solid-state lasers, such as the Ti:sapphire laser, typically produce pulses centered at 800 nm, it is necessary to generate UV pulses with nonlinear frequency-mixing techniques. I therefore constructed an optical setup to generate the 3rd and 4th harmonics, at 266.7 nm and 200 nm respectively, of a Ti:sapphire (Ti:sa) chirped-pulse amplified (CPA) laser system that produces 35 fs pulses centered at 800 nm. Thin beta-barium borate (β-BaB2O4, or BBO) crystals were chosen as a compromise between short pulse durations and reasonable conversion efficiencies, since ultrashort pulses are quite susceptible to broadening from group velocity dispersion (GVD). Output energies of around 11 μJ and 230 nJ were measured for the 266.7 nm and 200 nm pulses, respectively. The transform limits of the 3rd and 4th harmonic pulse lengths were calculated from their measured spectral widths. We found that the 266.7 nm bandwidth was large enough to support sub-30 fs pulses, and due to clipping at the short-wavelength end of the 200 nm spectrum, we calculated an upper limit of 38 fs. The pulses were compressed with pairs of CaF2 prisms to compensate for dispersion introduced by transmissive optics. Two-photon absorption (TPA) intensity autocorrelations revealed fully compressed pulse lengths of 36 ± 2 fs and 42 ± 4 fs for the 3rd and 4th harmonics, respectively.
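The transform-limit calculation mentioned above is a one-line formula; the sketch below evaluates it for a Gaussian pulse (time-bandwidth product 0.441). The bandwidth value is illustrative, not the thesis's measured width.

```python
# Transform-limited (TL) pulse duration from spectral FWHM, assuming a
# Gaussian pulse shape: dt * dnu = 0.441, with dnu = c * dlambda / lambda^2.
C = 299_792_458       # speed of light, m/s
TBP_GAUSSIAN = 0.441  # time-bandwidth product for a Gaussian pulse

def transform_limit_fs(center_nm, fwhm_nm):
    """TL duration in fs for a Gaussian pulse of given spectral FWHM."""
    dnu = C * fwhm_nm / center_nm**2 * 1e9  # spectral FWHM in Hz
    return TBP_GAUSSIAN / dnu * 1e15

print(transform_limit_fs(266.7, 1.5))  # ~70 fs for a 1.5 nm wide UV spectrum
```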
138

A Survey on Known Algorithms for Solving the Generalized Birthday Problem (k-list)

Namaziesfanjani, Mina 01 February 2013 (has links) (PDF)
The well-known birthday paradox underlies some of the most important problems in cryptographic applications: incremental hash functions and digital signatures in public-key cryptography, and low-weight parity-check equations of LFSRs in stream ciphers, are examples of applications whose attacks benefit from birthday-problem theory. Wagner introduced and formulated the k-dimensional birthday problem and proposed an algorithm that solves it in O(k · m^{1/log k}) time. Generalized birthday solutions have been used to break knapsack-based systems and to find collisions in hash functions. The optimized birthday algorithms can solve knapsack problems of dimension n, a problem believed to be NP-hard; its equivalent, the subset sum problem, seeks a solution over Z/mZ. The main property used to classify the problem is its density: when the density is small enough, the problem reduces to the shortest lattice vector problem and has a polynomial-time solution. Assigning a variable to each element of the lists, encoding the lists into a matrix, and treating each row of the matrix as an equation yields a multivariate polynomial system of equations whose solutions are solutions of the k-list problem; such systems can be attacked with algorithms like F4 and F5, with the strategy called eXtended Linearization (XL), and with related methods. We discuss the new approaches and methods proposed to reduce the complexity of these algorithms. For particular cases of over-determined systems (more equations than variables) with a single solution, Wolf and Thomae work toward a gradual decrease in the complexity of F5; moreover, they try to solve the problem using monomials of special degrees and linear equations for small lists. We observe and compare all of the suggested methods in this survey.
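The core of Wagner's k-list algorithm mentioned above is a tree of pairwise merges, each of which cancels a slice of bits. A minimal sketch for k = 4 over XOR follows; the bit width, list sizes, and function names are our toy choices, not the survey's.

```python
# Wagner's 4-list birthday algorithm over XOR on n-bit words:
# find x1 ^ x2 ^ x3 ^ x4 == 0 with xi drawn from list Li.
import random
from collections import defaultdict

def join(L1, L2, mask):
    """All pairs (a, b) with (a ^ b) & mask == 0, returned as (a^b, a, b)."""
    index = defaultdict(list)
    for a in L1:
        index[a & mask].append(a)
    return [(a ^ b, a, b) for b in L2 for a in index[b & mask]]

def wagner4(L1, L2, L3, L4, n):
    mask = (1 << (n // 2)) - 1        # level 1: cancel the low n/2 bits
    A = join(L1, L2, mask)
    B = join(L3, L4, mask)
    index = defaultdict(list)
    for v, a, b in A:
        index[v].append((a, b))
    for v, c, d in B:                 # level 2: full collision on all n bits
        for a, b in index[v]:
            return a, b, c, d
    return None

# With 32-bit words and lists of 2^13 elements, ~16 full collisions are
# expected, so a hit is nearly certain on this toy instance.
n, size = 32, 1 << 13
rng = random.Random(0)
lists = [[rng.getrandbits(n) for _ in range(size)] for _ in range(4)]
sol = wagner4(*lists, n)
if sol:
    a, b, c, d = sol
    assert a ^ b ^ c ^ d == 0
    print(hex(a), hex(b), hex(c), hex(d))
```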
140

Survival analysis issues with interval-censored data

Oller Piqué, Ramon 30 June 2006 (has links)
Survival analysis is used in various fields for analyzing data involving the duration between two events. It is also known as event history analysis, lifetime data analysis, reliability analysis or time-to-event analysis. One of the difficulties which arises in this area is the presence of censored data. The lifetime of an individual is censored when it cannot be measured exactly but partial information is available. Different circumstances can produce different types of censoring. Interval censoring refers to the situation where the event of interest cannot be directly observed and it is only known to have occurred during a random interval of time. This kind of censoring has generated a great deal of work in recent years and typically occurs for individuals in a study who are inspected or observed intermittently, so that an individual's lifetime is known only to lie between two successive observation times.

This PhD thesis is divided into two parts which handle two important issues of interval-censored data. The first part, composed of Chapters 2 and 3, concerns formal conditions which allow estimation of the lifetime distribution to be based on a well-known simplified likelihood. The second part, composed of Chapters 4 and 5, is devoted to the study of test procedures for the k-sample problem. The present work reproduces material which has already been published or submitted for publication.

In Chapter 1 we give the basic notation used in this thesis. We also describe the nonparametric approach to estimating the distribution function of the lifetime variable. Peto (1973) and Turnbull (1976) were the first authors to propose an estimation method based on a simplified version of the likelihood function. Other authors have studied the uniqueness of the solution given by this method (Gentleman and Geyer, 1994) or have improved it with new proposals (Wellner and Zhan, 1997).

Chapter 2 reproduces the paper of Oller et al. (2004). We prove the equivalence between the different characterizations of noninformative censoring that have appeared in the literature, and we define a constant-sum condition analogous to the one derived in the context of right censoring. We prove as well that when the noninformative condition or the constant-sum condition holds, the simplified likelihood can be used to obtain the nonparametric maximum likelihood estimator (NPMLE) of the failure time distribution function. Finally, we characterize the constant-sum property according to different types of censoring. In Chapter 3 we study the relevance of the constant-sum property to the identifiability of the lifetime distribution. We show that the lifetime distribution is not identifiable outside the class of constant-sum models. We also show that the lifetime probabilities assigned to the observable intervals are identifiable inside the class of constant-sum models. We illustrate all these notions with several examples.

Chapter 4 has been partially published in the survey paper of Gómez et al. (2004). It gives a general view of the procedures which have been applied to the nonparametric problem of comparing two or more interval-censored samples. We also develop S-Plus routines which implement the permutational versions of the Wilcoxon test, the logrank test and the t-test for interval-censored data (Fay and Shih, 1998). This part of the thesis is completed in Chapter 5 with several proposed extensions of Jonckheere's test. In order to test for an increasing trend in the k-sample problem, Abel (1986) gave one of the few generalizations of Jonckheere's test for interval-censored data. We suggest further Jonckheere-type tests based on the tests presented in Chapter 4, using permutational and Monte Carlo approaches. We provide computer programs for each proposal and perform a simulation study in order to compare the power of each proposal under different parametric assumptions and different alternatives. We motivate both chapters with the analysis of a dataset from a study of the benefits of zidovudine in patients in the early stages of HIV infection (Volberding et al., 1995).

Finally, Chapter 6 summarizes the results and addresses those aspects which remain to be completed.
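Since the thesis builds on Peto's and Turnbull's simplified-likelihood NPMLE, a minimal sketch of Turnbull's self-consistency (EM) iteration may help fix ideas: mass is placed on the "innermost" intervals built from the observed endpoints and redistributed until convergence. The toy data and names below are ours, not the thesis's.

```python
import numpy as np

def turnbull(intervals, n_iter=200):
    """NPMLE for interval-censored observations (l, r] via Turnbull's
    self-consistency algorithm. Returns the support intervals and masses."""
    lefts = sorted({l for l, _ in intervals})
    rights = sorted({r for _, r in intervals})
    # Innermost intervals: (l, r] with l a left endpoint, r the nearest
    # right endpoint above it, and no other endpoint strictly inside.
    support = []
    for l in lefts:
        rs = [r for r in rights if r > l]
        if rs:
            r = min(rs)
            if not any(l < e < r for e in lefts + rights):
                support.append((l, r))
    # alpha[i][j] = 1 if support interval j is contained in observation i
    alpha = np.array([[1.0 if l <= sl and sr <= r else 0.0
                       for (sl, sr) in support] for (l, r) in intervals])
    p = np.full(len(support), 1.0 / len(support))
    for _ in range(n_iter):  # EM: share each observation's mass, then average
        w = alpha * p
        w /= w.sum(axis=1, keepdims=True)
        p = w.mean(axis=0)
    return support, p

obs = [(0, 2), (1, 3), (2, 4), (0, 1), (3, 5)]
for (l, r), m in zip(*turnbull(obs)):
    print(f"({l}, {r}]: mass {m:.3f}")
```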
