1

Parametric analysis of a small circular phased-array radar antenna

Williamsen, Erik Martin 30 April 2011 (has links)
Presented is a study and analysis of a small circular phased-array antenna designed to achieve narrow beamwidths while maintaining full 360-degree coverage with no moving parts, small size, inexpensive construction, and minimal power usage. Existing methods of producing narrow beamwidths using patch-based designs are analyzed for their applicability given the design restrictions. A mathematical model of the system is then developed and used to generate a design, which is then simulated using Ansoft's HFSS electromagnetic simulation program. Several parameters, including overall radius and excitation-element design, are varied, and the results are analyzed against the desired response. The best results are then presented for future consideration.
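The beamwidth-versus-radius trade the abstract describes can be sketched with the standard array factor of a uniform circular array. This is generic antenna theory, not the thesis's actual model; the function name and parameters are illustrative only.

```python
import numpy as np

def circular_array_factor(n_elements, radius_wl, phi_deg, steer_deg=0.0):
    """Normalized array-factor magnitude of a uniform circular array.

    n_elements: number of identical elements spaced evenly on the circle
    radius_wl:  array radius in wavelengths
    phi_deg:    observation azimuth angle(s) in degrees (in-plane cut)
    steer_deg:  azimuth toward which the array is phased (degrees)
    """
    phi = np.radians(np.atleast_1d(phi_deg))
    phi0 = np.radians(steer_deg)
    # Angular position of each element around the circle
    alpha = 2.0 * np.pi * np.arange(n_elements) / n_elements
    k_r = 2.0 * np.pi * radius_wl  # k * a, with a expressed in wavelengths
    # Element phase toward phi minus the steering phase toward phi0
    af = np.exp(1j * k_r * (np.cos(phi[:, None] - alpha)
                            - np.cos(phi0 - alpha)))
    return np.abs(af.sum(axis=1)) / n_elements
```

Sweeping `phi_deg` over 0–360 degrees and repeating for several `radius_wl` values reproduces the qualitative effect studied in the thesis: larger radii give narrower main beams, at the cost of higher sidelobes.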
2

The structure and phase equilibria of fluids in confined systems

Ball, P. C. January 1988 (has links)
No description available.
3

Code optimizations for narrow bitwidth architectures

Bhagat, Indu 23 February 2012 (has links)
This thesis takes a HW/SW collaborative approach to tackle the problem of computational inefficiency in a holistic manner. The hardware is redesigned by restraining the datapath to merely 16-bit datawidth (integer datapath only) to provide an extremely simple, low-cost, low-complexity execution core which is best at executing the most common case efficiently. This redesign, referred to as the Narrow Bitwidth Architecture, is unique in that although the datapath is squeezed to 16 bits, it continues to offer the advantage of higher memory addressability like contemporary wider-datapath architectures. Its interface to the outside (software) world is termed the Narrow ISA. The software is responsible for efficiently mapping the current stack of 64-bit applications onto the 16-bit hardware. However, this HW/SW approach introduces a non-negligible penalty in both dynamic code size and performance, even with a reasonably smart code translator that maps the 64-bit applications onto the 16-bit processor. The goal of this thesis is to design a software layer that harnesses the power of compiler optimizations to assuage this performance penalty of the Narrow ISA. More specifically, this thesis focuses on compiler optimizations targeting the problem of how to compile a 64-bit program to a 16-bit datapath machine from the perspective of Minimum Required Computations (MRC). Given a program, the notion of MRC aims to infer how much computation is really required to generate the same (correct) output as the original program. Approaching perfect MRC is an intrinsically ambitious goal, as it requires oracle predictions of program behavior. Towards this end, the thesis proposes three heuristic-based optimizations to closely infer the MRC. The perspective of MRC unfolds into a definition of productiveness: if a computation does not alter the value already held at its storage location, it is non-productive and hence need not be performed.
In this research, the definition of productiveness has been applied to different granularities of the data flow as well as the control flow of programs. Three profile-based code optimization techniques have been proposed: 1. Global Productiveness Propagation (GPP), which applies the concept of productiveness at the granularity of a function. 2. Local Productiveness Pruning (LPP), which applies the same concept at the much finer granularity of a single instruction. 3. Minimal Branch Computation (MBC), a profile-based code-reordering optimization technique which applies the principles of MRC to conditional branches. The primary aim of all these techniques is to reduce the dynamic code footprint of the Narrow ISA. The first two optimizations (GPP and LPP) speculatively prune the non-productive (useless) computations using profiles. Further, these two techniques perform a backward traversal of the optimization regions to embed checks into the non-speculative slices, making them self-sufficient to detect mis-speculation dynamically. The MBC optimization is a use case of the broader concept of a lazy computation model. The idea behind MBC is to reorder the backslices containing narrow computations such that the minimal computations necessary to generate the same (correct) output are performed in the most frequent case; the rest of the computations are performed only when necessary. With the proposed optimizations, it can be concluded that there do exist ways to smartly compile a 64-bit application to a 16-bit ISA such that the overheads are considerably reduced.
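The productiveness idea can be illustrated with a toy sketch. This is an illustration only, not the thesis's GPP/LPP implementation (which operates on profiled program regions with mis-speculation checks); the function and the trace format are invented for this example. A store that writes the value its destination already holds is non-productive and can be pruned.

```python
def prune_nonproductive(stores, memory):
    """Filter a trace of (address, value) stores, keeping only the
    productive ones, i.e. stores that actually change the stored value.

    stores: iterable of (address, value) pairs in execution order
    memory: dict mapping address -> current value (mutated in place)
    """
    kept = []
    for addr, value in stores:
        if memory.get(addr) != value:   # productive: alters the location
            memory[addr] = value
            kept.append((addr, value))
        # else: non-productive store -- same value already present, pruned
    return kept
```

For example, in the trace `[(0, 1), (0, 1), (0, 2)]` the second store is non-productive and is dropped, shrinking the dynamic footprint, which is the same goal GPP and LPP pursue at function and instruction granularity.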
4

Turbulent velocity profiles : a new law for narrow channels

Pu, Jaan H., Bonakdari, H., Lassabatere, L., Joannis, C., Larrarte, F. 07 1900 (has links)
The determination of velocity profiles in turbulent narrow open channels is a difficult task due to the significant effects of the anisotropic turbulence that drives Prandtl's second kind of secondary flow in the cross section. Owing to these currents, the maximum velocity appears below the free surface; this is called the dip phenomenon. The classical log law describes the velocity distribution in the inner region of the turbulent boundary layer, but the Coles law and its wake function are not able to predict the velocity profile in the outer region of narrow channels. This paper relies on an analysis of the Navier-Stokes equations and yields a new formulation of the vertical velocity profile in the outer region of the boundary layer in the central cross-section area of steady, fully developed turbulent flows in open channels. This formulation is able to predict primary velocity profiles for both narrow and wide open channels. The new law is a modification of the classical one: it involves an additional parameter C_Ar that is a function of the position of the maximum velocity, ξ_dip, and the roughness height, k_s. ξ_dip may be derived either from measurements or from an empirical equation given in this paper. A wide range of longitudinal velocity profile data for narrow open channels has been used to validate the new law. The agreement between the experimental data and the profile given by the law is very good, despite the simplifications used.
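For orientation, the classical log law that the abstract's new law modifies is shown below. The exact form of the modified law and of C_Ar is given in the paper itself and is not reproduced here; κ, B_s, and the symbols below are standard rough-wall boundary-layer notation, not taken from this abstract.

```latex
% Classical log law for the inner region of a rough-wall boundary layer:
%   u(z)   streamwise velocity at height z above the bed
%   u_*    shear (friction) velocity
%   \kappa von Karman constant, approximately 0.41
%   k_s    equivalent sand-grain roughness height, B_s roughness constant
\frac{u(z)}{u_*} \;=\; \frac{1}{\kappa}\,\ln\!\left(\frac{z}{k_s}\right) + B_s
```

The paper's contribution is a correction to this profile in the outer region, parameterized by C_Ar(ξ_dip, k_s), so that the predicted maximum velocity can sit below the free surface, reproducing the dip phenomenon.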
5

Agronomic and Economic Evaluation of Ultra Narrow Row Cotton Production in Arizona in 1999

Husman, S. H., McCloskey, W. B., Teegerstrom, T., Clay, P. A. January 2000 (has links)
An experiment was conducted at the University of Arizona Maricopa Agricultural Center, Maricopa, Arizona in 1999 to compare and evaluate agronomic and economic differences between Ultra Narrow Row (UNR) and conventional cotton row spacing systems with respect to yield, fiber quality, earliness potential, plant growth and development, and production costs. Row spacing was 10 and 40 inches for the UNR and conventional systems, respectively. Two varieties were evaluated within each row spacing, Sure Grow 747 (SG 747) and Delta Pine 429RR (DP 429RR). Lygus populations were extremely high in the Maricopa, Arizona region in 1999, which resulted in poor fruit retention from early through mid-season. As a result of the poor boll load through mid-season, the UNR plots were irrigated and grown later into the season than desired, along with the conventional cotton, in order to set and develop a later-season boll load. The mean lint yield, averaged across varieties, was significantly greater (P=0.05) in the UNR row spacing at 1334 lb/A than in the conventional row spacing at 1213 lb/A. SG 747 produced 1426 and 1337 lb/A of lint in the UNR and conventional systems, respectively. DP 429RR produced 1242 and 1089 lb/A of lint in the UNR and conventional systems, respectively. Fiber grades were all 21 or 31 in both the UNR and conventional systems. Micronaire was 4.9 or less in both varieties within the UNR system. Micronaire was high at 5.3 in the conventionally produced SG 747, resulting in a discount, but was acceptable at 4.7 in the conventionally produced DP 429RR. Length and strength measurements met base standards in all cotton variety and row spacing combinations. Neither the conventional nor the UNR cotton production system was profitable, due primarily to high chemical insect control costs and early-season boll loss.
However, UNR production costs were lower than in the conventional system by $0.09 per pound on a cash cost basis and by $0.14 per pound when considering total costs, including variable and ownership costs.
6

Weed Control in Arizona Ultra Narrow Row Cotton: 1999 Preliminary Results

McCloskey, William B., Clay, Patrick A., Husman, Stephen H. January 2000 (has links)
In two 1999 Arizona studies, a preplant incorporated (PPI) application of Prowl (2.4 pt/A) or Treflan (0.75 lb a.i./A) followed by a topical Roundup Ultra (1 qt/A) application at the 3 to 4 true leaf cotton growth stage provided good weed control. At the University of Arizona Maricopa Agricultural Center field, which had low-density weed populations, a postemergence topical Staple (1.8 oz/A) application also provided good weed control but was more expensive. At the Buckeye, Arizona study site, a PPI application of Prowl at a reduced rate (1.2 pt/A) was as effective as the full rate (2.4 pt/A), but a preemergence application of Prowl (2.4 pt/A) was not as effective as either of the PPI Prowl rates or PPI Treflan (0.75 lb a.i./A). A postemergence topical Staple application (1.8 oz/A) following the Roundup Ultra application did not significantly improve weed control. After one field season of experimentation and observation in Arizona UNR cotton, experience suggests that in fields with low to moderate weed populations, a PPI Prowl or Treflan application followed by a postemergence topical Roundup Ultra application will provide acceptable weed control in most fields. However, the presence of nutsedge or other difficult-to-control weeds may require two postemergence topical Roundup Ultra applications prior to the four-leaf growth stage of cotton. More research is needed to further explore weed control options in Arizona UNR cotton production systems.
7

Preliminary Investigations in Ultra-narrow Row Cotton, Safford Agricultural Center, 1999

Clark, L. J., Carpenter, E. W. January 2000 (has links)
A preliminary investigation of ultra-narrow row cotton production was made at the Safford Agricultural Center to see whether that technology holds any promise for cotton producers in the high deserts of Arizona. The goals of the study were increases in plant populations to near 100,000 plants per acre using single, double, and quadruple lines per bed. In-season plant mapping was done to evaluate differences in plant growth characteristics, along with yield measurements to evaluate differences. Yield increases were not seen with increases in plant populations in either single-row or multiple-row plantings.
8

Evaluation of Commercial Ultra Narrow Cotton Production in Arizona

Clay, P. A., Isom, L. D., McCloskey, W. B., Husman, S. H. January 2000 (has links)
Seven commercial ultra narrow row (UNR) cotton fields were monitored on a weekly basis in Maricopa County, AZ in 1999. Varieties of Delta Pine and Sure Grow were planted from April 15 to June 1 and reached cut-out after accumulating 1913 to 2327 heat units after planting. Average yield for UNR cotton was 2.1 bales per acre, which was 0.4 bales per acre lower than the five-year average for cotton planted on conventional row spacings. Fiber quality from gin records for 801 bales showed average micronaire readings of 4.54 and grades of 11 and 21 for 74% of bales. Discounts for extraneous matter (bark, grass, and cracked seed) were 5.4%, and average strength (34.8) and staple length (27.12) were in acceptable ranges. Total cash costs ranged from $450 to $705.
9

The use and realisation of accentual focus in Central Catalan with a comparison to English

Estebas-Vilaplana, Eva January 2000 (has links)
No description available.
10

Passive Hallux Adduction Decreases Blood Flow to Plantar Fascia

Dunbar, Julia Lorene 01 July 2018 (has links)
Purpose: Due to the vital role that blood flow plays in maintaining tissue health, compromised blood flow can prevent effective tissue healing. An adducted hallux, as often seen inside a narrow shoe, may put passive tension on the abductor hallucis, consequently compressing the lateral plantar artery (LPA) against the calcaneus and thus restricting blood flow. The purpose of this study was to compare blood flow within the LPA before and after passive hallux adduction (PHA). Methods: Forty-five healthy volunteers (20 female, 25 male; age = 24.8 ± 6.8 yr; height = 1.7 ± 0.1 m; weight = 73.4 ± 13.5 kg) participated in this study. Blood velocity and vessel diameter measurements were obtained using ultrasound imaging (L8-18i transducer, GE Logiq S8). The LPA was imaged deep to the abductor hallucis for 120 seconds: 60 seconds at rest followed by 60 seconds of PHA. Maximal PHA was performed by applying pressure to the medial side of the proximal phalanx of the hallux. Blood flow was then calculated in mL/min, and pre-PHA blood flow was compared to blood flow during PHA. Results: Log-transformed data were used to run a paired t-test between pre-adduction and post-adduction blood flow. The volume of blood flow was 22.2% lower after PHA than before (–0.250 ± 0.063, p < 0.001). Conclusion: Although PHA is only a simulation of what happens to the hallux inside a narrow shoe, our preliminary finding of decreased blood flow with PHA suggests that blood flow in narrow footwear and its effects on tissues within the foot are worth investigating.
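Converting velocity and vessel diameter into flow in mL/min is the standard Q = v × A calculation. The sketch below is a generic hemodynamics computation; the function name, units, and the use of a single time-averaged velocity are assumptions for illustration, not details taken from the thesis.

```python
import math

def volumetric_flow_ml_per_min(velocity_cm_s, diameter_mm):
    """Volumetric flow from ultrasound-style measurements.

    Q = v * A: time-averaged velocity times the vessel's circular
    cross-sectional area, converted to mL/min (1 cm^3 == 1 mL).

    velocity_cm_s: mean blood velocity in cm/s
    diameter_mm:   vessel inner diameter in mm
    """
    radius_cm = (diameter_mm / 10.0) / 2.0      # mm -> cm, diameter -> radius
    area_cm2 = math.pi * radius_cm ** 2         # cross-sectional area
    flow_ml_s = velocity_cm_s * area_cm2        # cm/s * cm^2 = cm^3/s = mL/s
    return flow_ml_s * 60.0                     # mL/s -> mL/min
```

Note that because area scales with the square of the diameter, a small measurement error in vessel diameter has a larger effect on the computed flow than the same relative error in velocity.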
