41

Generic VLSI architectures : chip designs for image processing applications

Le Riguer, E. M. J. January 2001
No description available.
42

Structured test of VLSI arrays

Marnane, William Peter January 1989
No description available.
43

A study of the pyramid sensor : analytic theory, simulation and experiment

LeDue, Jeffrey Matthew. 10 April 2008
The Pyramid Sensor (PS) is a promising wavefront sensor (WFS) for astronomical adaptive optics (AO) due to its potential to increase the number of accessible scientific targets by more efficiently using guide star (GS) photons. This so-called magnitude gain, as well as the key role played by the PS in several novel multi-reference wavefront sensing schemes, has generated intense interest in the device. The diffraction-based theory of the PS and the underlying optical shop test, the Foucault knife-edge test, is reviewed. The theory is applied to calculate the magnitude gain. The impact of the magnitude gain on the number of galaxies accessible to observation with classical AO on a TMT-sized telescope for the Virgo Cluster Catalogue is assessed via simulations. Additional simulation results are shown to elucidate the impact of various parameters of the pyramidal prism on the magnitude gain. The results of experiments conducted in the UVic AO lab with a prototype 1D PS are discussed. The 1D PS uses a novel optical element called a holographic diffuser to linearize the response of the PS to wavefront tilt. The results of calibrating the sensor are given, as well as caveats to the use of such a device. The results of using the 1D PS to measure a static aberration, as well as spatial and temporal characterization of turbulence produced by the UVic AO lab's Hot-Air Turbulence Generator, are given.
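The abstract gives no formulas; purely as a hedged illustration (not drawn from this thesis or its 1D prototype), the sketch below combines the four pupil images formed behind a pyramid prism into x and y slope signals using the commonly quoted quad-cell-style estimate. The sign and grouping convention depends on how the pupils are labelled, so treat it as a sketch rather than the author's formulation.

import numpy as np

def pyramid_slopes(i1, i2, i3, i4, eps=1e-12):
    """Combine the four pupil images behind the pyramid into slope maps.

    Illustrative quad-cell-like estimate: normalise the difference of
    pupil-intensity sums by the total flux in each pupil pixel.
    """
    total = i1 + i2 + i3 + i4 + eps
    sx = ((i1 + i2) - (i3 + i4)) / total
    sy = ((i1 + i3) - (i2 + i4)) / total
    return sx, sy

if __name__ == "__main__":
    shape = (64, 64)
    flat = np.full(shape, 0.25)                 # equal flux in all pupils -> zero slope
    sx, sy = pyramid_slopes(flat, flat, flat, flat)
    print(float(sx.mean()), float(sy.mean()))   # ~0.0 0.0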
44

Taphonomic contribution of large mammal butchering experiments to understanding the fossil record

Leenen, Andrea 07 July 2011
The primary goal of this project is to create a modern comparative collection of complete large bovid skeletons that record butchery marks made by stone tools. Four different raw materials commonly found in the southern African archaeological record (chert, quartzite, dolerite and hornfels) were selected for flake production. Butchery was conducted on three cows by modern Bushmen subsistence hunters skilled in the processing of animal carcasses. They form part of a relatively isolated group of !Xo-speaking Bushmen resident in the village of Kacgae in the Ghanzi district of western Botswana. The study focuses on characterising the type and conspicuousness of stone-generated butchering marks on bones under low magnification, and on documenting patterning, including anatomical location, number and orientation. Because numerous natural events and human practices modify bones, unequivocal interpretation of bone modifications is sometimes difficult. In addition, mimics resulting from non-human activity produce the same or qualitatively similar patterns, which complicates positive identification of butchery marks made by hominins. Reliable measures are required for interpretation of fossil bone modifications, and controlled actualistic observations provide a direct link between the process of modification (stone tool butchery aimed at complete flesh removal) and the traces produced. A number of taphonomic processes, including bone modification by various animals and geological processes, are recorded in comparative collections housed at institutions in the province of Gauteng in the Republic of South Africa. These provide reference material for taphonomists attempting to identify agents responsible for the modification and accumulation of fossil bone assemblages, particularly from early hominin cave sites in the Sterkfontein Valley. However, no such reference material exists for hominin modification of bone, which motivates the collection of these traces. The modern comparative collection produced by this study records butchery marks inflicted exclusively by habitual hunters who are also skilled butchers, and provides a resource to help researchers accurately identify hominin-produced butchery marks on fossil bones. The accompanying catalogue records the type and conspicuousness, anatomical location and orientation of the butchery marks and provides a controlled sample against which a fossil assemblage can be compared. Results indicate no consistent patterning in the intensity of butchery marking with regard to the type of stone tool material utilised. However, a high number of butchery marks per surface area was recorded for most stone tool materials on certain skeletal elements, including the mandible, ribs, scapula and humerus. Overall, there are indications that raw material influences butchery marking; however, the small sample size limits the potential to identify a pattern with regard to the type of raw material from which the stone tools responsible for the butchery marks were produced. Furthermore, the vast range of variables that can exist during the butchery process contributes to the equivocal nature of the results. Additional research, some of it ongoing, is required to expand the sample of stone tool butchering, to utilise iron tools and to investigate ethnographic differences in butchering techniques.
45

Approximation algorithms with limited memory for processing large graphs : the Vertex Cover problem

Campigotto, Romain 06 December 2011
We study an optimization problem on graphs (the Vertex Cover problem) in a very specific context: that of very large data instances. We defined a processing model based on three constraints (relating to the limited amount of memory available compared with the large volume of data to be processed) that draws on properties of several existing models. We studied several algorithms adapted to this model. We first analysed, theoretically, the quality of their solutions and their complexities, and then conducted an experimental study on large graphs. Overall, the work carried out during this thesis can provide indicators for choosing the algorithm or algorithms best suited to the Vertex Cover problem on large graphs. Choosing an algorithm (an approximation algorithm, moreover) that is both efficient (in terms of solution quality and complexity) and satisfies the constraints of the model under consideration is delicate: the most efficient algorithms are not always the best adapted. In the work we carried out, we reached the conclusion that it is preferable to start by choosing the best-adapted algorithm rather than the most efficient one.
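The thesis's specific model and algorithms are not spelled out in the abstract; as a rough, hypothetical illustration of the kind of memory-frugal approach it describes, the sketch below streams the edge list once and builds a vertex cover from a maximal matching, the classical 2-approximation. Memory use is proportional to the cover, not to the whole graph, which is the kind of constraint the abstract's model alludes to.

from typing import Iterable, Tuple, Set

def streaming_vertex_cover(edges: Iterable[Tuple[int, int]]) -> Set[int]:
    """One-pass 2-approximation for Vertex Cover.

    Edges are read as a stream (e.g. from disk). Whenever an edge has
    both endpoints uncovered, add both to the cover; the added pairs
    form a maximal matching, so the cover is at most twice optimal.
    """
    cover: Set[int] = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.add(u)
            cover.add(v)
    return cover

if __name__ == "__main__":
    # Toy example: a path 0-1-2-3 plus an extra edge 1-3.
    stream = [(0, 1), (1, 2), (2, 3), (1, 3)]
    print(sorted(streaming_vertex_cover(stream)))  # [0, 1, 2, 3]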
46

Transparent large-page support for Itanium Linux

Wienand, Ian Raymond, Computer Science & Engineering, Faculty of Engineering, UNSW January 2008
The abstraction provided by virtual memory is central to the operation of modern operating systems. Making the most efficient use of the available translation hardware is critical to achieving high performance. The multiple page-size support provided by almost all architectures promises considerable benefits but poses a number of implementation challenges. This thesis presents a minimally invasive approach to transparent multiple page-size support for Itanium Linux. In particular, it examines the interaction between supporting large pages and Itanium's two inbuilt hardware page-table walkers: one a virtual linear page table with limited support for storing translations of different page sizes, the other a more flexible but higher-overhead hash-table-based translation cache. Compared to a single-page-size kernel, a range of benchmarks show performance improvements when multiple page sizes are available, generally those with large working sets that stress the TLB. However, other benchmarks are negatively impacted. Analysis shows that the increased TLB coverage resulting from the use of large pages frequently does not reduce TLB miss rates sufficiently to make up for the increased cost of TLB reloads. These results, which are specific to the Itanium architecture, suggest that large-page support for Itanium Linux is best enabled selectively, with insight into application behaviour.
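The thesis works at the kernel level and needs no application changes; purely as a hedged, modern-Linux illustration of what large-page backing of a buffer means from user space (not the Itanium mechanism described above), the sketch below asks the kernel to back an anonymous mapping with transparent huge pages via madvise, where the platform exposes that flag.

import mmap

# Allocate a 16 MiB anonymous mapping and, on Linux builds of Python 3.8+
# that expose MADV_HUGEPAGE, ask the kernel to back it with transparent
# huge pages. On other platforms the request is simply skipped.
LENGTH = 16 * 1024 * 1024
buf = mmap.mmap(-1, LENGTH)

madv_hugepage = getattr(mmap, "MADV_HUGEPAGE", None)
if madv_hugepage is not None:
    buf.madvise(madv_hugepage)

# Touch every base page so the kernel actually populates the mapping.
for offset in range(0, LENGTH, mmap.PAGESIZE):
    buf[offset:offset + 1] = b"\x01"

buf.close()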
47

Epi-CHO, an episomal expression system for recombinant protein production in CHO cells

Kunaparaju, Raj Kumar, Biotechnology & Biomolecular Sciences, Faculty of Science, UNSW January 2008
The aim of the current project is to develop a transient expression system for Chinese Hamster Ovary (CHO) cells based on autonomous replication and retention of plasmid DNA. The expression system, named Epi-CHO, comprises (1) a recombinant CHO-K1 cell line encoding the Polyoma (Py) virus large T-Antigen (PyLT-Ag), and (2) a DNA expression vector, pPy/EBV, encoding the Py origin of replication (PyOri) for autonomous replication, and the Epstein-Barr virus (EBV) Nuclear Antigen-1 (EBNA-1) and EBV origin of replication (OriP) for plasmid retention. The CHO-K1 cell line expressing PyLT-Ag, named CHO-T, was adapted to suspension growth in serum-free medium (EXCELL-302) to facilitate large-scale transient transfection and recombinant (r) protein production. PyLT-Ag expressed in CHO-T supported replication of PyOri-containing plasmids and enhanced growth and r-protein production. A scalable cationic-lipid-based transfection was optimised for CHO-T cells using LipofectAMINE-2000. Destabilised Enhanced Green Fluorescent Protein (D2EGFP) and Human Growth Hormone (HGH) were used as reporter proteins to demonstrate transgene expression and productivity. Transfection of CHO-T cells with the vector pPy/EBV encoding D2EGFP showed prolonged and enhanced EGFP expression, and transfection with pPy/EBV encoding HGH resulted in a final concentration of 75 mg/L of HGH in culture supernatant 11 days following transfection.
48

The role of the giant Canada goose (Branta canadensis maxima) cecum in nutrition

Garcia, Delia M., January 2006
Thesis (Ph.D.)--University of Missouri-Columbia, 2006. The entire dissertation/thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file (which also appears in the research.pdf); a non-technical general description, or public abstract, appears in the public.pdf file. Title from title screen of research.pdf file, viewed on April 25, 2007. Vita. Includes bibliographical references.
49

Lacome: a cross-platform multi-user collaboration system for a shared large display

Liu, Zhangbo 05 1900
Lacome is a multi-user, cross-platform system that supports collaboration in a shared large-screen display environment. Lacome allows users to share their desktops or application windows using any standard VNC server. It supports multi-user concurrent interaction on the public shared display as well as input redirection, so users can control each other's applications. Lacome supports separate types of interaction through a Lacome client: window management tasks on the shared display (move, resize, iconify, de-iconify), and application interactions through the VNC servers. The system architecture provides for Publishers that share information and Navigators that access information. A Lacome client can have either or both roles, and can initiate additional Publishers on other VNC servers that may not be Lacome clients. Explicit access control policies on both the server side and the client side provide a flexible framework for sharing. The architecture builds on standard cross-platform components such as VNC and the JRE. Interaction techniques used in the window manager ensure simple and transparent multi-user interactions for managing the shared display space. We illustrate the design and implementation of Lacome and provide insights from initial user experience with the system.
50

Multiple spatial resolution image change detection for environmental management applications

Pape, Alysha Dawn 15 December 2006
Across boreal forests and resource-rich areas, human-induced change is rapidly occurring at various spatial scales. In the past, satellite remote sensing has provided a cost-effective, reliable method of monitoring these changes over time and over relatively small areas. Those instruments offering high spatial detail, such as the Landsat Thematic Mapper or Enhanced Thematic Mapper (TM or ETM+), typically have small swath widths and long repeat times that result in compositing intervals too large to resolve accurate time scales for many of these changes. Obtaining multiple scenes and producing maps over very large forested areas is further restricted by high processing costs and the small window of acquisition opportunity. Coarse spatial resolution instruments such as the Moderate Resolution Imaging Spectroradiometer (MODIS) or the Advanced Very High Resolution Radiometer (AVHRR) typically have short revisit times (days rather than weeks), large swath widths (hundreds of kilometres), and in some cases hyperspectral resolutions, making them prime candidates for multiple-scale change detection research initiatives.

In this thesis, the effectiveness of 250 m spatial resolution MODIS data for updating an existing large-area, 30 m spatial resolution Landsat TM land cover map product is tested. A land cover polygon layer was derived by segmentation of Landsat TM data using eCognition 4.0. This polygon layer was used to create a polygon-based MODIS NDVI time series consisting of imagery acquired in 2000, 2001, 2002, 2003, 2004 and 2005. These MODIS images were then differenced to produce six multiple-scale layers of change. Accuracy assessment, based on available GIS data in a subregion of the larger map area, showed an overall accuracy as high as 59%, with the largest error associated with change omission (0.51). The Cramer's V correlation coefficient (0.38) was calculated using the GIS data and compared to the results of an index-based Landsat change detection (Cramer's V = 0.67). This thesis research showed that areas greater than 15 hectares are adequately represented (approximately 75% accuracy) with the MODIS-based change detection technique. The resulting change information offers potential to identify areas that have been burned or extensively logged, and provides general information on areas that have experienced greater change and are likely suitable for analysis with higher spatial resolution data.
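As a minimal sketch of the NDVI-differencing idea behind the change detection described above, assuming two co-registered red/near-infrared arrays per date and an illustrative threshold (the thesis's polygon segmentation and accuracy assessment are omitted), the following flags pixels whose NDVI drops sharply between dates.

import numpy as np

def ndvi(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """Normalised Difference Vegetation Index: (NIR - red) / (NIR + red)."""
    red = red.astype(np.float64)
    nir = nir.astype(np.float64)
    denom = nir + red
    return np.where(denom > 0, (nir - red) / denom, 0.0)

def change_mask(red_t0, nir_t0, red_t1, nir_t1, threshold=0.2):
    """Flag pixels whose NDVI dropped by more than `threshold` between dates.

    A large NDVI decrease is a crude proxy for disturbance such as fire or
    clearing; the threshold is illustrative, not taken from the thesis.
    """
    diff = ndvi(red_t1, nir_t1) - ndvi(red_t0, nir_t0)
    return diff < -threshold

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    shape = (100, 100)
    red0, nir0 = rng.uniform(0.05, 0.1, shape), rng.uniform(0.3, 0.5, shape)
    red1, nir1 = red0.copy(), nir0.copy()
    nir1[:20, :20] = 0.08            # simulate vegetation loss in one corner
    mask = change_mask(red0, nir0, red1, nir1)
    print("changed pixels:", int(mask.sum()))

In the thesis the comparison is polygon-based rather than per pixel, so a per-polygon version would aggregate the NDVI difference (for example, its mean) over each segmented land cover polygon before thresholding.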
