801

Testing the functional equivalence of the mammalian Dlx5 and Dlx6 proteins

Quach, Anna 11 January 2013 (has links)
The Distal-less (Dll) gene has an ancient evolutionary origin. Chordates have retained duplicated Dll genes; vertebrates have six distinct paralogues (Dlx1 through Dlx6 in mammals) arranged in three cis-linked pairs that are co-expressed. Dlx genes are expressed in a conserved nested pattern that defines a proximal-distal axis in the pharyngeal arch tissue of vertebrates. Dlx5-/- and Dlx6-/- mouse neonates have similar phenotypic variations in the lower jaw and inner ear bones, with the Dlx6-/- phenotype being a less perturbed version of the Dlx5-/- phenotype. Conversely, Dlx5/6-/- double mutants have a homeotic transformation of the lower jaw into a second set of maxillary structures. The combination of expression patterns and null phenotypes has led to the proposal of a “Dlx code” that patterns the craniofacial tissue. However, the nature of this code, whether individual Dlx transcription factors supply unique functions, or whether they make a quantitative contribution to a more generic and shared Dlx function, is not well understood. One prediction of a quantitative model for Dlx function in the pharyngeal arches is the functional equivalency of the proteins encoded by divergent cis-linked Dlx paralogues. To address this aspect of the model, three core functions of Dlx5 and Dlx6 were compared quantitatively: suppression of cell growth, transcription activity and DNA binding affinity. In most respects both proteins behaved very similarly.
802

Crimes of cruelty against animals: an examination of the legislator's proposals

Giroux, Valéry 01 1900 (has links)
Following a public consultation, Minister of Justice Anne McLellan sought to respond to groups and citizens by including, in omnibus Bill C-17, amendments to the criminal law concerning cruelty to animals. The bill having died on the order paper at the last federal election, these new provisions were taken up in Bill C-15, which was itself prorogued at the end of the first session of the 37th Parliament. The changes it proposed were carried over in full into Bill C-10, tabled in October 2002. The Senate recently referred the latter to the Standing Committee on Legal and Constitutional Affairs, charged with splitting it in two so that a new bill, C-10B, dealing exclusively with cruelty to animals, could soon be tabled. With such amendments imminent, it is worth examining both their context and their scope. In the first part, we present elements of the philosophical and societal context behind the growing interest in the animal question. The emergence of new moral theories according greater value to animals, the criticisms and demands of pressure groups and of citizens in general, and recent studies showing a link between cruelty to animals and violence against humans all call for a re-examination of human/animal relations. A review of these contextual factors clarifies what these bills respond to. In the second part, we identify several inconsistencies in the current law in order to determine what the legislator should change.
The inconsistencies arise at several levels: the coexistence of laws protecting the animal defined as a sentient being with laws governing animals considered mere resources useful to humans; the placement of offences protecting animals in the property section of the Criminal Code; the relatively high gravity of crimes against animals compared with certain offences against the person; problems with the wording of particular offences; and distinctions in the degree of protection afforded animals according to their usefulness to humans. The proposed law is then examined to determine whether it will resolve these problems. Removed from the part concerning property, the offences will focus more on the animal's sentience than on its usefulness to humans. As for liability regimes, the constitutional ambiguity created by the presumption in subsection 446(3) of the Criminal Code will be eliminated. As for penalties, the planned toughening risks making the offences disproportionately severe compared with certain crimes against the person. Although updated, the offences will remain numerous and anecdotal. Finally, the strictly legal changes are largely cosmetic, while the symbolic value of the bills is more significant. Indeed, while some anthropocentric considerations underlie the bills, a recognition of the intrinsic value of animals also seems to have inspired them. Unfortunately, the paradigm of the animal being proposed, even more than the one currently recognized, is difficult to reconcile with the use of animals for humans' self-interested ends.
803

Dynamic Adaptive Multimesh Refinement for Coupled Physics Equations Applicable to Nuclear Engineering

Dugan, Kevin 16 December 2013 (has links)
The processes studied by nuclear engineers generally include coupled physics phenomena (Thermal-Hydraulics, Neutronics, Material Mechanics, etc.), and modeling such multiphysics processes numerically can be computationally intensive. A way to reduce the computational burden is to use spatial meshes that are optimally suited for a specific solution; such meshes are obtained through a process known as Adaptive Mesh Refinement (AMR). AMR can be especially useful for modeling multiphysics phenomena by allowing each solution component to be computed on an independent mesh (Multimesh AMR). Using AMR on time-dependent problems requires the spatial mesh to change in time as the solution changes in time. Current algorithms presented in the literature address this concern by adapting the spatial mesh at every time step, which can be inefficient. This thesis proposes an algorithm for saving computational resources by using a spatially adapted mesh for multiple time steps, and only adapting the spatial mesh when the solution has changed significantly. It explores the mechanisms used to determine when and where to spatially adapt for time-dependent, coupled physics problems. The algorithm is implemented using the deal.II finite element library [1, 2], in 2D and 3D, and is tested on a coupled neutronics and heat conduction problem in 2D. The algorithm is shown to perform better than a uniformly refined static mesh and, in some cases, a mesh that is spatially adapted at every time step.
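The adaptation-skipping idea in the abstract above can be sketched in a few lines. The callbacks (`advance`, `change`, `adapt`) are invented placeholders for illustration, not the thesis's deal.II implementation:

```python
def run_adaptive(u, steps, advance, change, adapt, tol):
    """Drive a time loop that keeps the current mesh until the solution
    has drifted by more than `tol` since the last adaptation.

    u       -- initial solution values
    advance -- one time step of the solver
    change  -- indicator comparing u to the last-adapted snapshot
    adapt   -- re-adapt the mesh and transfer the solution onto it
    """
    reference = list(u)            # solution snapshot at last adaptation
    adaptations = 0
    for _ in range(steps):
        u = advance(u)
        if change(u, reference) > tol:
            u = adapt(u)           # only now pay the cost of remeshing
            reference = list(u)
            adaptations += 1
    return u, adaptations
```

The point of the sketch is the `if` guard: unlike the adapt-every-step algorithms the abstract criticizes, adaptation cost is paid only when the indicator says the solution has actually moved.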
804

Functionality based refactoring : improving source code comprehension

Beiko, Jeffrey Lee 02 January 2008 (has links)
Software maintenance is the lifecycle activity that consumes the greatest amount of resources. Maintenance is a difficult task because of the size of software systems. Much of the time spent on maintenance is spent trying to understand source code. Refactoring offers a way to improve source code design and quality. We present an approach to refactoring that is based on the functionality of source code. Sets of heuristics are captured as patterns of source code. Refactoring opportunities are located using these patterns, and dependencies are verified to check whether the located refactorings preserve the dependencies in the source code. Our automated tool detects functionality-based refactoring opportunities, verifies dependencies, and performs the refactorings that preserve dependencies. These refactorings transform the source code into a series of functional regions, which makes it easier for developers to locate the code they are searching for. This also creates a chunked structure in the source code, which helps with bottom-up program comprehension. Thus, this process reduces the amount of time required for maintenance by reducing the amount of time spent on program comprehension. We perform case studies to demonstrate the effectiveness of our automated approach on two open source applications. / Thesis (Master, Computing) -- Queen's University, 2007-10-05 12:48:56.977
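As a toy illustration of the dependency check described above (names and data shapes are invented; the thesis's tool operates on real source code, not this simplification), regrouping code units into functional regions can be rejected when the new ordering would break a define-before-use dependency:

```python
def group_into_regions(units):
    """units: list of (name, region, deps), where deps are names that
    must appear earlier in the regrouped order. Returns the new order
    of names grouped by region, or None if the regrouping would place
    a unit before one of its dependencies."""
    by_region = {}
    for name, region, deps in units:
        by_region.setdefault(region, []).append((name, deps))
    seen, ordered = set(), []
    for region in sorted(by_region):       # emit one region at a time
        for name, deps in by_region[region]:
            if not set(deps) <= seen:
                return None                # refactoring breaks a dependency
            seen.add(name)
            ordered.append(name)
    return ordered
```

This mirrors the abstract's two-phase process: locate a candidate regrouping, then verify it preserves dependencies before performing it.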
805

Generation of Concurrency Controls using Discrete-Event Systems

Dragert, Christopher 27 September 2008 (has links)
The development of controls for the execution of concurrent code is non-trivial. This work shows how existing discrete-event system (DES) theory can be successfully applied to this problem. From code without concurrency controls and a specification of desired behaviours, a DES representation of the problem is obtained, and then used to generate concurrency control code. By applying rigorously proven DES theory, the resulting code comes with guarantees not present in similar works. All control schemes generated in DES are nonblocking, yielding code that is free of both livelock and deadlock. Additionally, the generated control scheme is minimally restrictive, meaning only problematic behaviours are prevented. If the specifications cannot be enforced as presented, the largest controllable subset is instead enforced. The result, which requires no further interaction to generate, is the best possible control scheme given the interaction between the specifications and the original code. Existing methods encounter difficulties when faced with multiple specifications that interact to form deadlocks. Modular DES theory is successfully applied, allowing resolution of these conflicts without requiring the user to introduce new specifications. Moreover, the approach is independent of specific programming or specification languages. A Java implementation is given, along with two problems showing the process in action. / Thesis (Master, Computing) -- Queen's University, 2008-09-25 09:03:51.593
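A toy example of the kind of control scheme such DES synthesis produces, written by hand here for a two-process mutual-exclusion specification (the thesis generates supervisors like this automatically from DES models; the event names and state encoding below are invented):

```python
def step(state, event):
    """Plant: two processes, each cycling idle -> busy -> idle.
    s1/s2 start (controllable), f1/f2 finish. None = event not possible."""
    p1, p2 = state
    if event == "s1" and p1 == "idle": return ("busy", p2)
    if event == "f1" and p1 == "busy": return ("idle", p2)
    if event == "s2" and p2 == "idle": return (p1, "busy")
    if event == "f2" and p2 == "busy": return (p1, "idle")
    return None

def legal(state):
    return state != ("busy", "busy")   # spec: never both busy

CONTROLLABLE = {"s1", "s2"}

def supervisor(state):
    """Enable every physically possible event except controllable
    events whose next state would violate the specification."""
    enabled = set()
    for e in ["s1", "f1", "s2", "f2"]:
        nxt = step(state, e)
        if nxt is None:
            continue
        if e in CONTROLLABLE and not legal(nxt):
            continue                   # minimally restrictive: disable only this
        enabled.add(e)
    return enabled
```

Note the supervisor never disables `f1`/`f2`: uncontrollable events cannot be prevented, which is exactly the controllability requirement the abstract's "largest controllable subset" refers to.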
806

TECHNIQUES FOR IMPROVING SOFTWARE DEVELOPMENT PROCESSES BY MINING SOFTWARE REPOSITORIES

Dhaliwal, Tejinder 08 September 2012 (has links)
Software repositories such as source code repositories and bug repositories record information about the software development process. By analyzing the rich data available in software repositories, we can uncover interesting information. This information can be leveraged to guide software developers, or to automate software development activities. In this thesis we investigate two activities of the development process: selective code integration and grouping of field crash-reports, and use the information available in software repositories to improve each of the two activities. / Thesis (Master, Electrical & Computer Engineering) -- Queen's University, 2012-09-04 12:26:59.388
807

Power and identity: negotiation through code-switching in the Swiss German classroom

Kidner, Keely Unknown Date
No description available.
808

Language indexation : a syntactic constraint on code-mixing

Miller, Amanda January 1993 (has links)
Code-mixing, defined as intra-sentential language alternation, is known to demonstrate structurally determined patterns of restriction. Universal constraints have been proposed to account for these structural restrictions (Poplack (1980), Woolford (1983), Di Sciullo, Muysken and Singh (1986)) but have had limited success in accounting for code-mixing between typologically diverse languages. This thesis examines the structural principles that apply universally to the interaction of languages in code-mixed sentences. We argue that systematic cross-linguistic restrictions on code-mixing can be accounted for by a syntactic constraint that is sensitive to the distinction between functional and lexical categories. / We propose the constraint of Language Indexation, according to which (structurally) adjacent categories of like functional/lexical category status must be realised in the same language. We show how this proposal accounts for code-mixed data from a range of language pairs, including Tagalog/English, Moroccan Arabic/French, Swahili/English, Irish/English, Hindi/English, Spanish/English and French/English. A difference in the application of Language Indexation in nominal versus verbal projections is discussed with reference to Tagalog/English and Moroccan Arabic/French code-mixing. Finally, we briefly examine the implications of Language Indexation with respect to the code-mixing of aphasic bilinguals.
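A highly simplified sketch of what checking such a constraint could look like. The actual constraint is defined over syntactic structure; this flattened sequence version is an illustration only, with invented labels:

```python
def violates_language_indexation(nodes):
    """nodes: (status, language) pairs for structurally adjacent
    categories, e.g. ("functional", "English"). Flags any adjacent
    pair of like functional/lexical status realised in different
    languages, per the proposed constraint."""
    for (s1, l1), (s2, l2) in zip(nodes, nodes[1:]):
        if s1 == s2 and l1 != l2:
            return True
    return False
```

Under this toy encoding, a switch between a functional and a lexical category is permitted, while a switch between two adjacent lexical categories is not.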
809

The ICD-10 coding system in chiropractic practice and the factors influencing compliancy

Pieterse, Riaan January 2009 (has links)
A dissertation presented to the Faculty of Health, Durban University of Technology, for the Masters Degree in Technology: Chiropractic, 2009. / Background: The International Classification of Diseases (ICD) provides codes to classify diseases in such a manner, that every health condition is assigned to a unique category. Some of the most common diagnoses made by chiropractors are not included in the ICD-10 coding system, as it is mainly medically orientated and does not accommodate these diagnoses. This can potentially lead to reimbursement problems for chiropractors in future and create confusion for medical aid schemes as to what conditions chiropractors actually diagnose and treat. Aim: To determine the level of compliancy of chiropractors, in South Africa, to the ICD-10 coding procedure and the factors that may influence the use of correct ICD-10 codes. As well as to determine whether the ICD-10 diagnoses chiropractors commonly submit to the medical aid schemes, reflect the actual diagnoses made in practice. Method: The study was a retrospective survey of a quantitative nature. A self-administered questionnaire was e-mailed and posted to 380 chiropractors, practicing in South Africa. The electronic questionnaires were sent out four times at two week intervals for the duration of eight weeks; and the postal questionnaires sent once. A response rate of 16.5% (n = 63) was achieved. Raw data was received from the divisional manager of the coding unit of Discovery Health (Pty) Ltd. in the form of an excel spreadsheet containing the most common ICD-10 diagnoses made by chiropractors in South Africa, for the period June 2006 to July 2007, who had submitted claims to the Medical Scheme. The spreadsheet also contained depersonalised compliance statistics of chiropractors to the ICD-10 system from July 2006 to October 2008. SPSS version 15 was used for descriptive statistical data analysis (SPSS Inc., Chicago, Ill, USA). 
Results: The age range of the 63 participants who responded to the questionnaire was 26 to 79 years, with an average of 41 years. The majority of the participants were male (74.6%, n = 47). KwaZulu-Natal had 25 participants (39.6%), Gauteng 17 (26.9%), Western Cape 12 (19%), Eastern Cape four (6.3%), Free State and Mpumalanga two (3.1%) each and North West one (1.5%). The mean knowledge score for ICD-10 coding was 43.5%, suggesting a relatively low level of knowledge. The total percentage of mistakes for electronic claims was higher for both the primary and unlisted claims (3.93% and 2.18%) than for manual claims (1.57% and 1.59%). The total percentage of mistakes was low but increased marginally each year for both primary claims (1.43% in 2006; 1.99% in 2007; 2.33% in 2008) and unlisted claims (0% in 2006; 2.61% in 2007; 3.07% in 2008). CASA members were more likely to be aware of assistance offered, in terms of ICD-10 coding, through the medical schemes and the association (p = 0.131) than non-members. There was a non-significant trend towards participants who had been on an ICD-10 coding course (47.6%; n = 30) having a greater knowledge of the ICD-10 coding procedures (p = 0.147). Their knowledge was almost 10% higher than that of those who had not been on a course (52.4%; n = 33). Most participants (38.1%; n = 24) did not use additional cause codes when treating cases of musculoskeletal trauma, nor did they use multiple codes (38.7%; n = 24) when treating more than one condition in the same patient. Nearly 70% of participants (n = 44) used the M99 code in order to code for vertebral subluxation and the majority (79.4%; n = 50) believed the definition of subluxation used in ICD-10 coding to be the same as that which chiropractors use to define subluxation.
According to the medical aid data, the top five diagnoses made by chiropractors from 2006 to 2007 were: Low back pain, lumbar region, M54.56 (8996 claims); Cervicalgia, M54.22 (6390 claims); Subluxation complex, cervical region, M99.11 (2895 claims); Other dorsalgia, multiple sites in spine, M54.80 (1524 claims) and Subluxation complex, sacral region, M99.14 (1293 claims). According to the questionnaire data, the top five diagnoses (Table 4.24) were: Lumbar facet syndrome, M54.56 (25%); Lumbar facet syndrome, M99.13 (23.3%); Cervical facet syndrome, M99.11 (21.7%); Cervicogenic headache, G44.2 (20%) and Cervicalgia, M54.22 (20%). Conclusion: The sample of South African chiropractors was fairly compliant with the ICD-10 coding system. Although the two sets of data (i.e. from the medical aid scheme and the questionnaire) regarding the diagnoses that chiropractors make on a daily basis correlate well with each other, there is no consensus in the profession as to which codes to use for chiropractic-specific diagnoses. These chiropractic-specific diagnoses (e.g. facet syndrome) are, however, the most common diagnoses made by chiropractors in private practice. Many respondents indicated that because of this they sometimes use codes that they know will not be rejected, even if it is the incorrect code. For more complicated codes, the majority of respondents indicated that they did not know how to, or were not interested in, submitting the correct codes to comply with the level of specificity required by the medical aid schemes. The challenge is to make practitioners aware of the advantages of correct coding for the profession.
810

Multiuser detection employing recurrent neural networks for DS-CDMA systems.

January 2006 (has links)
Over the last decade, access to personal wireless communication networks has evolved to a point of necessity. Attached to the phenomenal growth of the telecommunications industry in recent times is an escalating demand for higher data rates and efficient spectrum utilization. This demand is fuelling the advancement of third generation (3G), as well as future, wireless networks. Current 3G technologies are adding a dimension of mobility to services that have become an integral part of modern everyday life. Wideband code division multiple access (WCDMA) is the standardized multiple access scheme for the 3G Universal Mobile Telecommunication System (UMTS). As an air interface solution, CDMA has received considerable interest over the past two decades, and a great deal of current research is concerned with improving the application of CDMA in 3G systems. A key component of CDMA is multiuser detection (MUD), which is aimed at enhancing system capacity and performance by optimally demodulating multiple interfering signals that overlap in time and frequency. This is a major research problem in multipoint-to-point communications. Due to the complexity associated with optimal maximum likelihood detection, many different sub-optimal solutions have been proposed. The focus of this dissertation is the application of neural networks for MUD in a direct sequence CDMA (DS-CDMA) system. Specifically, it explores how the Hopfield recurrent neural network (RNN) can be employed to give yet another sub-optimal solution to the optimization problem of MUD. There is great scope for neural networks in fields encompassing communications. This is primarily attributed to their non-linearity, adaptivity and key function as data classifiers. In the context of optimum multiuser detection, neural networks have been successfully employed to solve similar combinatorial optimization problems. The concepts of CDMA and MUD are discussed.
The use of a vector-valued transmission model for DS-CDMA is illustrated, and common linear sub-optimal MUD schemes, as well as the maximum likelihood criterion, are reviewed. The performance of these sub-optimal MUD schemes is demonstrated. The Hopfield neural network (HNN) for combinatorial optimization is discussed. Basic concepts and techniques related to the field of statistical mechanics are introduced and it is shown how they may be employed to analyze neural classification. Stochastic techniques are considered in the context of improving the performance of the HNN. A neural-based receiver, which employs a stochastic HNN and a simulated annealing technique, is proposed. Its performance is analyzed in a communication channel that is affected by additive white Gaussian noise (AWGN) by way of simulation. The performance of the proposed scheme is compared to that of the single-user matched filter, linear decorrelating and minimum mean-square error detectors, as well as the classical HNN and the stochastic Hopfield network (SHN) detectors. Concluding, the feasibility of neural networks (in this case the HNN) for MUD in a DS-CDMA system is explored by quantifying the relative performance of the proposed model using simulation results and in view of implementation issues. / Thesis (M.Sc.Eng.)-University of KwaZulu-Natal, 2006.
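A minimal sketch of the Hopfield-style bit update such a detector builds on, for a synchronous, noiseless channel with no annealing (the correlation matrix `R` and the update schedule here are illustrative, not taken from the dissertation). Starting from the matched-filter decisions, each bit is flipped toward the sign of its local field, which monotonically reduces the MUD energy E(b) = bᵀRb − 2yᵀb:

```python
def hopfield_mud(y, R, max_sweeps=20):
    """y: matched-filter outputs for K users; R: KxK code
    cross-correlation matrix with unit diagonal. Returns +/-1 bit
    decisions at a stable (locally minimal energy) state."""
    K = len(y)
    b = [1 if yk >= 0 else -1 for yk in y]      # matched-filter start
    for _ in range(max_sweeps):
        changed = False
        for k in range(K):
            # local field: own matched-filter output minus the
            # multiple-access interference from the other users
            field = y[k] - sum(R[k][j] * b[j] for j in range(K) if j != k)
            new = 1 if field >= 0 else -1
            if new != b[k]:
                b[k], changed = new, True
        if not changed:
            break                               # fixed point reached
    return b
```

The stochastic HNN and simulated-annealing variants the abstract describes modify this loop by making each update probabilistic, precisely to escape the poor local minima where this deterministic sweep can get stuck.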
