
Balanced codes

Al-Bassam, Sulaiman 04 January 1990
Balanced codes, in which each codeword contains equally many 1's and 0's, are useful in applications such as optical transmission and optical recording. When balanced codes are used, the same number of 1's and 0's pass through the channel after the transmission of every word, so the channel is in a dc-null state. Optical channels require this property because they employ AC-coupled devices. Line codes, in which individual codewords need not be balanced, are also used as dc-free codes in such channels. In this thesis we present the research that leads to the following results:
1. Balanced codes: these have a higher information rate than existing codes yet maintain similar encoding and decoding complexities.
2. Error-correcting balanced codes: in many cases, these give higher information rates and more efficient encoding and decoding algorithms than the best-known equivalent codes.
3. DC-free coset codes: a new technique to design dc-free coset codes was developed; these codes have better properties than existing ones.
4. Generalizations of balanced codes: balanced codes are generalized in three ways, of which the first is the most significant:
   a) Balanced codes with low dc level: designed by combining the techniques used in (1) and (3) above. A lower dc level and a higher transition density are achieved at the cost of one extra check bit, making these codes considerably more attractive for optical transmission than plain balanced codes.
   b) Non-binary balanced codes: balanced codes over a non-binary alphabet.
   c) Semi-balanced codes: codes in which the numbers of 1's and 0's in every codeword differ by at most a certain value.
5. t-EC/AUED coset codes: t-error-correcting/all-unidirectional-error-detecting codes, again designed with the technique in (3) above. These codes attain a higher information rate than the best-known equivalent codes while maintaining the same encoding/decoding complexity. / Graduation date: 1990
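As a minimal illustration of the balance property defined above (not one of the encoding schemes developed in the thesis), the Python sketch below checks whether a word is balanced and computes the running digital sum of a bit stream, which returns to zero after every balanced word in a dc-free stream; the function names are hypothetical.

```python
def is_balanced(word: str) -> bool:
    """A word is balanced when it contains equally many 1's and 0's."""
    return word.count("1") == word.count("0")

def running_digital_sum(stream: str) -> int:
    """Map 0 -> -1 and 1 -> +1 and accumulate; a dc-free stream keeps this
    bounded, and it returns to zero after every transmitted balanced word."""
    return sum(+1 if bit == "1" else -1 for bit in stream)

# Example: two balanced 6-bit words; the digital sum is zero after each word.
words = ["110100", "001011"]
assert all(is_balanced(w) for w in words)
assert running_digital_sum("".join(words)) == 0
```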

Wireless Communication over Fading Channels with Imperfect Channel Estimates

Basri, Amir Ali 19 January 2009
In wireless communication systems, transmitted signals are corrupted by fading as well as noise. The receiver can benefit from estimates of the fading channel to detect the transmitted symbols. However, in practical wireless systems the channel cannot be estimated perfectly at the receiver, so it is crucial to examine the effect of channel estimation error on the structure and performance of the receivers. In the first part of the thesis, we study single-user systems with single-antenna reception over fading channels in the presence of Gaussian-distributed channel estimation error. Using the statistics of the channel estimation error, we derive the structure of maximum-likelihood receivers for a number of modulation formats and then analyze their performance over fading channels. In the second part of the thesis, we consider the uplink of multi-user wireless systems with multi-antenna reception. For conventional diversity combining techniques such as maximal ratio combining and optimum combining, we analyze the performance degradation due to imperfect channel estimates in the presence of multiple interfering users for several fading channels. By investigating the probability density function of the output signal-to-interference ratio, we derive analytical expressions for several performance measures, including the average signal-to-interference ratio, outage probability and average bit-error probability. These expressions quantify the performance degradation due to channel estimation error.
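As a rough illustration of the effect studied in the first part of this thesis (not the receiver structures derived in it), the Python/NumPy sketch below simulates BPSK detection over a Rayleigh fading channel when the receiver combines with a noisy channel estimate; the estimation-error variance sigma_e2 and all other parameter values are assumptions chosen only for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000            # number of symbols
snr_db = 10.0          # symbol SNR in dB (assumed)
sigma_e2 = 0.05        # assumed variance of the Gaussian channel-estimation error

snr = 10 ** (snr_db / 10)
bits = rng.integers(0, 2, n)
x = 2 * bits - 1                                                    # BPSK symbols, +/-1

h = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)     # Rayleigh fading
w = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2 * snr)  # AWGN
y = h * x + w

e = np.sqrt(sigma_e2 / 2) * (rng.normal(size=n) + 1j * rng.normal(size=n))
h_hat = h + e                                                       # imperfect channel estimate

bits_hat = (np.real(np.conj(h_hat) * y) > 0).astype(int)            # coherent detection with h_hat
print("BER with imperfect CSI:", np.mean(bits_hat != bits))
```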

Project Management: A Grounded Theory Study of Three Innovative Companies (Projekthantering: En Grounded Theory-undersökning av tre innovativa företag)

Brunnström, Linus, Olofsson, Tommy January 2013
Ever since our internship periods we have had a basic understanding of how one can work in organizations where innovation is a central concept. However, we never came into contact with the decision processes surrounding these projects. Since the internships we have questioned the valuation methods we previously regarded as fairly comprehensive. We are both active on the stock market, which is a further factor that has increased our interest in the subject, since the market's valuation of companies with high risk and a high degree of innovation is uncertain, and analyses that shed light on the problem are lacking. This has led us to seek valuations of these companies using different methods and to deepen our understanding of how external valuation is carried out.

The authors' studies have naturally inspired the choice of thesis topic. Our studies have also influenced how we view valuation and what results we expect from the study. We have studied finance, where valuation is very central because it is important to weigh opportunities against one another. During the autumn we covered several methods, most of which are based on discounted cash flows. The other parameter, besides cash flows, is risk. Since risk and return should always follow each other, the return must be high enough for the level of risk. We have calculated risk using CAPM or WACC; these methods use either market conditions or more firm-specific conditions to measure risk. We have also come into contact with the Basel accords and thus the Value-at-Risk measures, which estimate how much one risks losing over a given period with a specified confidence. Using these methods, and by comparing key ratios, we have valued everything from projects to portfolios, always with cash flows as the basis for the calculations, though with uncertainty about their magnitude.

During our internship periods we both worked in very innovative environments. Neither of us came into direct contact with the decision process when choosing between projects, but an interest in how it was done was born. The lack of explanations, both in our education and in the reports we have read, of how projects without certain cash flows are handled has only increased our interest in companies and their project choices.

Since we want to understand what the decision process looks like at innovative companies, we have chosen an abductive method, i.e., starting from our data and theory to explore a research gap. Our hope is that this study can serve as a source of inspiration for other abductive or inductive studies as well as a basis for further deductive research on the subject.

We would also like to thank the participating companies and the people we interviewed, and to thank our supervisor Anders Isaksson for all his help through this tangled process.
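As a brief illustration of the CAPM and discounted-cash-flow machinery mentioned above (the numbers below are purely hypothetical and not taken from the thesis), a Python sketch:

```python
# CAPM required return and a simple discounted-cash-flow valuation (hypothetical numbers).
risk_free = 0.02        # assumed risk-free rate
market_return = 0.08    # assumed expected market return
beta = 1.3              # assumed project beta

discount_rate = risk_free + beta * (market_return - risk_free)   # CAPM: r_f + beta * (E[R_m] - r_f)

cash_flows = [120.0, 140.0, 160.0]                                # assumed yearly cash flows
present_value = sum(cf / (1 + discount_rate) ** t
                    for t, cf in enumerate(cash_flows, start=1))
print(f"CAPM discount rate: {discount_rate:.1%}, present value of cash flows: {present_value:.1f}")
```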

On Constructing Low-Density Parity-Check Codes

Ma, Xudong January 2007
This thesis focuses on designing Low-Density Parity-Check (LDPC) codes for forward error correction. The target application is real-time multimedia communication over packet networks. We investigate two code design issues that are important in the target application scenarios: designing LDPC codes with low decoding latency, and constructing capacity-approaching LDPC codes with very low error probabilities. On designing LDPC codes with low decoding latency, we present a framework for optimizing the code parameters so that decoding can be completed after only a small number of iterations. The brute-force approach to this optimization is numerically intractable, because it involves a difficult discrete optimization problem. In this thesis, we derive an asymptotic approximation to the number of decoding iterations. Based on this approximation, we propose an approximate optimization framework for finding near-optimal code parameters that minimize the number of decoding iterations; the approximate approach is numerically tractable. Numerical results confirm that the proposed optimization approach has excellent numerical properties, and that codes with excellent performance in terms of the number of decoding iterations can be obtained. Our results show that the numbers of decoding iterations of codes obtained by the proposed design approach can be as small as one-fifth of those of some previously well-known codes. The numerical results also show that the proposed asymptotic approximation is generally tight even in cases far from the limiting regime. On constructing capacity-approaching LDPC codes with very low error probabilities, we propose a new LDPC code construction scheme based on 2-lifts. Based on stopping-set distribution analysis, we propose design criteria for the resulting codes to have very low error floors. High error floors are the main problem of previously constructed capacity-approaching codes and prevent them from achieving very low error probabilities. Numerical results confirm that codes with very low error floors can be obtained by the proposed construction scheme and design criteria. Compared with codes from the previous standard construction schemes, which have error floors at the level of 10^-3 to 10^-4, the codes from the proposed approach show no observable error floors at levels above 10^-7. The error floors of the codes from the proposed approach are also significantly lower than those of codes from previous approaches to constructing low-error-floor codes.
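For readers unfamiliar with LDPC decoding, the following Python sketch shows a generic toy parity-check matrix and a hard-decision bit-flipping decoder; it is only a minimal illustration of how parity checks drive iterative decoding, not the code construction, the belief-propagation decoders, or the latency optimization studied in the thesis.

```python
import numpy as np

# Toy parity-check matrix (real LDPC codes are far larger and sparser).
# Rows are parity checks, columns are code bits.
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1],
              [0, 0, 1, 1, 0, 1]])

def bit_flip_decode(r, H, max_iters=10):
    """Hard-decision bit-flipping: repeatedly flip the bit that participates
    in the largest number of unsatisfied parity checks."""
    r = r.copy()
    for _ in range(max_iters):
        syndrome = H @ r % 2
        if not syndrome.any():          # all checks satisfied: valid codeword
            return r, True
        unsat = syndrome @ H            # per-bit count of failed checks
        r[np.argmax(unsat)] ^= 1
    return r, False

codeword = np.array([0, 0, 0, 0, 0, 0])   # the all-zero word is always a codeword
received = codeword.copy()
received[2] ^= 1                           # introduce a single bit error
decoded, ok = bit_flip_decode(received, H)
print(ok, decoded)                         # True [0 0 0 0 0 0]
```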

Residual-Based Isotropic and Anisotropic Mesh Adaptation for Computational Fluid Dynamics

Baserinia, Amir Reza January 2008
The accuracy of a fluid flow simulation depends not only on the numerical method used for discretizing the governing equations, but also on the distribution and topology of the mesh elements. Mesh adaptation is a technique for automatically modifying the mesh in order to improve the simulation accuracy, in an attempt to reduce the manual work required for mesh generation. The conventional approach to mesh adaptation is based on a feature-based criterion that identifies distinctive features in the flow field, such as shock waves and boundary layers. Although this approach has proved simple and effective in many CFD applications, its implementation may require considerable trial and error to determine the appropriate criterion for certain applications. An alternative is the residual-based approach, in which the discretization error of the fluid flow quantities across the mesh faces is used to construct an adaptation criterion. Although this approach provides a general framework for developing robust mesh adaptation criteria, it incurs significant computational overhead. The main objective of the thesis is to present a methodology for developing a mesh adaptation criterion for fluid flow problems that offers the simplicity of a feature-based criterion and the robustness of a residual-based criterion. The methodology is demonstrated in the context of a second-order accurate cell-centred finite volume method for simulating laminar, steady, incompressible flows of constant-property fluids. In this methodology, the errors in the mass and momentum flows across the faces of each control volume are estimated with a Taylor series analysis. These face flow errors are then used to construct the desired adaptation criteria for isotropic triangular meshes and anisotropic quadrilateral meshes. The adaptation results for the lid-driven cavity flow show that the solution error on the resulting adapted meshes is 80 to 90 percent lower than that on a uniform mesh with the same number of control volumes. The advantage of the proposed mesh adaptation method is its ability to produce meshes that lead to more accurate solutions than those of the conventional methods with approximately the same computational effort.
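The Python sketch below illustrates only the generic flag-and-refine pattern behind residual-based adaptation: cells are flagged when an estimated face flux error exceeds a threshold. The error indicator, the threshold rule, and the data are assumptions for illustration, not the Taylor-series flux-error estimate developed in the thesis.

```python
import numpy as np

def flag_cells_for_refinement(face_error, cell_faces, fraction=0.5):
    """Flag a cell when the largest estimated flux error on its faces exceeds
    a fraction of the global maximum face error (assumed refinement rule)."""
    cell_error = np.array([face_error[faces].max() for faces in cell_faces])
    return cell_error > fraction * face_error.max()

# Hypothetical data: 6 faces shared among 3 cells.
face_error = np.array([0.01, 0.20, 0.05, 0.02, 0.04, 0.03])
cell_faces = [[0, 1, 2], [2, 3, 4], [4, 5, 0]]
print(flag_cells_for_refinement(face_error, cell_faces))   # [ True False False]
```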

Standardization and use of colour for labelling of injectable drugs

Jeon, Hyae Won Jennifer January 2008
Medication errors are among the most common causes of patient injury in healthcare systems. Poor labelling has been identified as a contributing factor to medication errors, particularly those involving injectable drugs. Colour coding and colour differentiation are two major techniques used on labels to aid drug identification; however, neither approach has been scientifically proven to reduce the occurrence of, or harm from, medication errors. This thesis investigates the potential effects of different approaches to using colour on standardized labels for the task of identifying a specific drug in a storage area, via a controlled experiment with human users. Three ways of using colour were compared: labels using only black, white and grey; labels where a unique colour scheme, adopted from an existing manufacturer's label, is applied to each drug; and labels colour-coded by the product's strength level within the product line. The results show that people may be vulnerable to confusion between drugs that have look-alike labels and look-alike, sound-alike names. In particular, when each drug label had a fairly unique colour scheme, participants were more prone to misperceive a look-alike, sound-alike drug name as the correct drug name than when no colour was used, or when colour was used on the labels with no apparent one-to-one association between label colour and drug identity. This result could suggest a perceptual bias toward perceiving stimuli as the expected stimuli, especially when the task involved is familiar and the stimuli look similar to the expected ones. Moreover, the results point to a potential problem that may arise from standardizing existing labels if careful consideration is not given to how reduced visual variation among the labels of different products affects the way label colours are perceived and used for drug identification. The thesis concludes with recommendations for improving the existing standard for labelling of injectable drug containers and for avoiding medication errors due to labelling and packaging in general.

Error Detection in Number-Theoretic and Algebraic Algorithms

Vasiga, Troy Michael John January 2008
CPUs are unreliable: at any point in a computation, a bit may be altered with some (small) probability. This probability may seem negligible, but for large calculations (i.e., months of CPU time), the likelihood of an error being introduced becomes increasingly significant. Motivated by this fact, this thesis defines a statistical measure called robustness, and measures the robustness of several number-theoretic and algebraic algorithms. Consider an algorithm A that implements a function f, such that f has range O and algorithm A has range O', where O⊆O'; that is, the algorithm may produce results which are not in the possible range of the function. Specifically, given an algorithm A and a function f, this thesis classifies the output of A into one of three categories:
1. Correct and feasible: the algorithm computes the correct result;
2. Incorrect and feasible: the algorithm computes an incorrect result, and this output is in O;
3. Incorrect and infeasible: the algorithm computes an incorrect result, and this output is in O'\O.
Using probabilistic measures, we apply this classification scheme to quantify the robustness of algorithms for computing primality (i.e., the Lucas-Lehmer and Pépin tests), group order and quadratic residues. Moreover, we show that there is typically an "error threshold" above which the algorithm is unreliable (that is, it will rarely give the correct result).
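For reference, here is a minimal Python sketch of the Lucas-Lehmer test mentioned above, in its standard textbook form; it illustrates the algorithm whose robustness is analyzed, not the error-injection or robustness measurements from the thesis.

```python
def lucas_lehmer(p: int) -> bool:
    """Return True if the Mersenne number 2**p - 1 is prime (p an odd prime)."""
    m = (1 << p) - 1          # Mersenne number 2**p - 1
    s = 4                     # s_0 = 4; s_i = s_{i-1}**2 - 2 (mod m)
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0             # 2**p - 1 is prime iff s_{p-2} == 0 (mod m)

# 2**13 - 1 = 8191 is prime; 2**11 - 1 = 2047 = 23 * 89 is not.
print(lucas_lehmer(13), lucas_lehmer(11))
```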

Fish (Oreochromis niloticus) as a Model of Refractive Error Development

Shen, Wei January 2008
Myopia is a common ocular condition worldwide, and the mechanism of myopia is still not clear. A number of animal models of myopia and refractive error development have been proposed. The fact that form deprivation myopia could be induced in tilapia fish, as shown previously in my research, suggests that tilapia could be a new animal model for myopia research. In the first part of this thesis the tilapia model was perfected; then, based on this model, the effect of systemic hormones (thyroid hormones) associated with eye and body development on refractive error development was investigated. Lastly, the physiological and morphological changes in the retina were further studied with optical coherence tomography (OCT). In these experiments, significant amounts of myopia and hyperopia were induced within two weeks using goggles with lens inserts, as in other higher vertebrate animal models such as chicks. The results from form deprivation treatment also show that the sensitivity of tilapia eyes may be age-related during the emmetropization process: the larger the fish, the less hyperopic the eye, though the small-eye artefact may be a factor. The susceptibility of the eye's refractive development to the visual environment may also be linked to plasma hormone levels; it was found that induced refractive errors could be shifted in the hyperopic direction by high levels of thyroid hormones. Also, after two weeks of treatment with negative or positive lens goggles, the tilapia retina becomes thinner or thicker, respectively. When the goggles are removed, the thickness of the retina changes within hours and gradually returns to normal. However, circadian retinomotor movement is a complicating factor, since it affects the retinal thickness measurement with OCT at some time points. In conclusion, tilapia represent a good lower vertebrate model for myopia research, suggesting a universal mechanism of myopia development that may involve systemic hormones and immediate, short-term retinal responses.

Student Satisfaction Surveys and Nonresponse: Ignorable Survey, Ignorable Nonresponse

Boyer, Luc January 2009
With an increasing reliance on satisfaction exit surveys to measure how university alumni qualify their experiences during their degree program, it is uncertain whether satisfaction is sufficiently salient, for some alumni, to generate distinguishable satisfaction scores between respondents and nonrespondents. This thesis explores whether, to what extent, and why nonresponse to student satisfaction surveys makes any difference to our understanding of student university experiences. A modified version of Michalos' multiple discrepancies theory was used as the conceptual framework to ascertain which aspects of the student experience are likely to be nonignorable, and which are likely to be ignorable. In recognition of the hierarchical structure of educational organizations, the thesis explores the impact of alumnus-level and departmental characteristics on nonresponse error; the impact of survey protocols on nonresponse error is also explored. Nonignorable nonresponse was investigated using a multi-method approach. Quantitative analyses were based on a combined dataset gathered by the Graduate Student Exit Survey, conducted at each convocation over a period of three years. These data were compared against basic enrolment variables, departmental characteristics, and the public version of Statistics Canada's National Graduate Survey. Analyses were conducted to ascertain whether nonresponse is nonignorable at the descriptive and analytical levels (form-resistant hypothesis). Qualitative analyses were based on nine cognitive interviews with recent and soon-to-be alumni. Results were severely weakened by external and internal validity issues, and are therefore indicative but not conclusive. The findings suggest that nonrespondents differ from respondents, that satisfaction intensity is weakly related to response rate, and that the ensuing nonresponse error in the marginals can be classified, albeit not fully, as missing at random. The form-resistant hypothesis remains unaffected by variations in response rates. Cognitive interviews confirmed the presence of measurement errors, which further weakens the case for nonignorability. An inadvertent methodological alignment of response pool homogeneity, a misspecified conceptual model, measurement error (dilution), and a non-salient, bureaucratically-inspired survey topic are proposed as the likely reasons for the findings of ignorability. Methodological and organizational implications of the results are also discussed.

A Defense of Moral Error Theory

Hirsch, Kyle M, Mr. 24 February 2011
Richard Joyce claims that sincerely uttering moral claims necessarily commits the moral claimant to endorsing false beliefs regarding the predication of nonexistent (non-)natural moral properties. For Joyce, any proposition containing a subject, x, saddled with the predicate "…is moral" will have the truth-value 'false', so long as the predicate fails to refer to anything real in the world. Furthermore, given the philosophical community's present state of epistemic ignorance, we lack sufficient evidence to justify endorsing the existence of (non-)natural moral properties purportedly capable of serving as truth-makers for moral claims. My thesis offers a defense of Joyce's moral error theory against two lines of criticism proffered by Russ Shafer-Landau, one conceptual in nature and the other ontological. I argue that the available evidence compels the informed agnostic about moral truth to suspend judgment on the matter, if not to endorse Joyce's stronger thesis that all moral claims are false.
