401

Working for the competition : an analysis of the local news pool

West, Katharine Elizabeth 12 July 2012 (has links)
The Local News Pool, or “LNP,” refers to an arrangement in which competing television news stations within a single market form a cooperative partnership to share content such as video and interviews. This study used in-depth interviews with assignment editors, producers, and photographers in Austin, Texas; Tampa, Florida; and Denver, Colorado, and incorporated a quantitative content analysis of news stories assigned to the LNP in Austin to discover how this convergence model operates, its effects on workers, and the potential for homogenization. The study found that incorporating this convergence model into a newsroom and categorizing certain stories as “shared” altered the level of importance photographers and producers placed on those stories. When these journalists knew in advance that the competition might use or have an interest in a story, they came to regard it as “extra” or “filler” rather than as highly desired for the news broadcast. In addition, this study provides an updated look at the multilayer gatekeeping process by incorporating competing organizations into that decision-making process. The study found that gatekeepers cooperate on stories each believes would produce similar content if their stations sent their own crews. Competition resurfaces when gatekeepers request stories not intended for the LNP, such as breaking news. The level of cooperation is often based on ratings and on the perception of a counterpart’s willingness to reciprocate when needed. / text
402

The size and depth of Boolean circuits

Jang, Jing-Tang Keith 27 September 2013 (has links)
We study the relationship between size and depth for Boolean circuits. Over four decades, very few results have been obtained for either special or general Boolean circuits. Spira showed in 1971 that any Boolean formula of size s can be simulated in depth O(log s). Spira's result means that an arbitrary Boolean expression can be replaced by an equivalent "balanced" expression that can be evaluated very efficiently in parallel. For general Boolean circuits, the strongest known result is that Boolean circuits of size s can be simulated in depth O(s / log s). We obtain significant improvements over the general bounds for the size versus depth problem for special classes of Boolean circuits. We show that every layered Boolean circuit of size s can be simulated by a layered Boolean circuit of depth O(sqrt{s log s}). For planar circuits and synchronous circuits of size s, we obtain simulations of depth O(sqrt{s}). Improving any of the above results by polylog factors would immediately improve the bounds for general circuits. We generalize Spira's theorem and show that any Boolean circuit of size s with segregators of size f(s) can be simulated in depth O(f(s) log s). This improves and generalizes a simulation of polynomial-size Boolean circuits of constant treewidth k in depth O(k² log n) by Jansen and Sarma. Since the existence of small balanced separators in a directed acyclic graph implies that the graph also has small segregators, our results also apply to circuits with small separators. Our results imply that the class of languages computed by non-uniform families of polynomial-size circuits that have constant-size segregators equals non-uniform NC¹. As an application of our simulation of circuits in small depth, we show that the Boolean Circuit Value problem for circuits with constant-size segregators (or separators) is in deterministic SPACE(log² n). Our results also imply that the Planar Circuit Value problem, which is known to be P-complete, is in SPACE(sqrt{n} log n). We also show that the Layered Circuit Value and Synchronous Circuit Value problems, which are both P-complete, are in SPACE(sqrt{n}). Our study of circuits with small separators and segregators led us to obtain space-efficient algorithms for computing balanced graph separators. We extend this approach to obtain space-efficient approximation algorithms for the search and optimization versions of the SUBSET SUM problem, which is one of the most studied NP-complete problems. Finally, we study the relationship between simultaneous time and space bounds on Turing machines and Boolean circuit depth. We observe a new connection between planar circuit size and simultaneous time and space products of input-oblivious Turing machines. We use this to prove quadratic lower bounds on the product of time and space for several explicit functions for input-oblivious Turing machines. / text
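For reference, the size-versus-depth bounds stated in this abstract can be collected in one place (this is a restatement of the results listed above, not additional claims):

```latex
% Depth bounds for circuits of size s, as stated in the abstract
\begin{align*}
\text{Boolean formulas (Spira, 1971):} \quad & \mathrm{depth} = O(\log s) \\
\text{general Boolean circuits:} \quad & \mathrm{depth} = O(s / \log s) \\
\text{layered circuits:} \quad & \mathrm{depth} = O\!\left(\sqrt{s \log s}\right) \\
\text{planar and synchronous circuits:} \quad & \mathrm{depth} = O\!\left(\sqrt{s}\right) \\
\text{circuits with segregators of size } f(s): \quad & \mathrm{depth} = O\!\left(f(s)\log s\right)
\end{align*}
```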
403

Post-permeation stability of modified bentonite suspensions under increasing hydraulic gradients

El-Khattab, May Mohammad 05 November 2013 (has links)
A slurry wall is a geotechnical engineering measure used to control the migration of contaminants by retarding groundwater flow. Sand-bentonite slurry walls are commonly used as levees and containment liners. The performance of bentonite slurry in sand-bentonite slurry walls was investigated by studying the rheological properties of bentonite suspensions, the penetration length of bentonite slurry into clean sand, and the stability of the trench under in-situ hydraulic gradients. In this study, the rheological parameters of bentonite suspensions were measured at various bentonite fractions by weight from 6 to 12% with 0-3% sodium pyrophosphate, an ionic additive used to control the rheological properties of the bentonite slurries. The penetrability of the bentonite slurries through Ottawa sand was studied by injecting the slurries into sand columns at different bentonite fractions. The injection tests were performed with permeameters of different diameters to eliminate any bias in the results due to permeameter size. An empirical correlation for predicting the penetration length of bentonite slurry based on apparent viscosity, yield stress, effective particle size, relative density, and injection pressure was updated to take into account the effect of permeameter diameter. Moreover, the stability of sand-bentonite slurry walls was examined by studying the hydraulic performance of sand permeated with bentonite suspensions under increasing hydraulic gradients. The critical hydraulic gradient at which washing out of the bentonite suspension begins was examined. For specimens with bentonite contents below the threshold value, flow occurred through the sand voids and minimal washing out occurred. On the other hand, when the bentonite content was high enough to fill all the void space between the sand particles, the flow was controlled by the clay void ratio; in this case, washing out did occur with increasing gradients, accompanied by an increase in hydraulic conductivity. Accordingly, a relation between the yield stress of the bentonite suspensions and the critical hydraulic conductivity was developed. / text
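The abstract does not name the constitutive model fit to the suspensions; as a hedged illustration only, a Bingham-plastic form is one common way to relate the yield stress and apparent viscosity of bentonite slurries (the study's actual rheological model may differ):

```latex
% Bingham-plastic model (illustrative; not necessarily the model used in this study)
\tau = \tau_y + \mu_p\,\dot{\gamma}, \qquad
\mu_{\mathrm{app}}(\dot{\gamma}) = \frac{\tau}{\dot{\gamma}} = \mu_p + \frac{\tau_y}{\dot{\gamma}}
```
where τ is the shear stress, τ_y the yield stress, μ_p the plastic viscosity, γ̇ the shear rate, and μ_app the apparent viscosity.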
404

Applied statistical modeling of three-dimensional natural scene data

Su, Che-Chun 27 June 2014 (has links)
Natural scene statistics (NSS) have played an increasingly important role both in our understanding of the function and evolution of the human vision system and in the development of modern image processing applications. Because depth/range, i.e., egocentric distance, is arguably the most important quantity a visual system must compute (from an evolutionary perspective), the joint statistics between natural image and depth/range information are of particular interest. However, while there exist regular and reliable statistical models of two-dimensional (2D) natural images, little work has been done on statistical modeling of natural luminance/chrominance and depth/disparity, and of their mutual relationships. One major reason is the dearth of high-quality three-dimensional (3D) image and depth/range databases. To facilitate research progress on 3D natural scene statistics, this dissertation first presents a high-quality database of color images and accurately co-registered depth/range maps acquired with an advanced laser range scanner mounted with a high-end digital single-lens reflex camera. Using this high-resolution, high-quality database, this dissertation performs reliable and robust statistical modeling of natural image and depth/disparity information, including new bivariate and spatially oriented correlation models. In particular, these new statistical models capture higher-order dependencies embedded in spatially adjacent bandpass responses projected from natural environments, which have not yet been well understood or explored in the literature. To demonstrate the efficacy and effectiveness of the advanced NSS models, this dissertation addresses two challenging yet very important problems: depth estimation from monocular images and no-reference stereoscopic/3D (S3D) image quality assessment. A Bayesian depth estimation framework is proposed that considers the canonical depth/range patterns in natural scenes and forms priors and likelihoods using both univariate and bivariate NSS features. The no-reference S3D image quality index proposed in this dissertation exploits new bivariate and correlation NSS features to quantify different types of stereoscopic distortions. Experimental results show that the proposed framework and index achieve performance superior to state-of-the-art algorithms in both disciplines. / text
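As a hedged illustration of the kind of joint luminance–depth statistics described above (not the dissertation's actual models), the sketch below computes simple bandpass responses of a co-registered luminance/depth pair with a difference-of-Gaussians filter and summarizes their bivariate histogram and correlation. The function names, filter scales, and synthetic inputs are all assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def bandpass(img, sigma_fine=1.0, sigma_coarse=2.0):
    """Difference-of-Gaussians bandpass response (a simple stand-in for
    the oriented bandpass decompositions used in NSS modeling)."""
    return gaussian_filter(img, sigma_fine) - gaussian_filter(img, sigma_coarse)

def joint_bandpass_stats(luminance, depth, bins=64):
    """Bivariate histogram and correlation of co-registered luminance and
    depth bandpass responses (hypothetical inputs: 2D float arrays)."""
    bl = bandpass(luminance).ravel()
    bd = bandpass(depth).ravel()
    hist2d, _, _ = np.histogram2d(bl, bd, bins=bins, density=True)
    corr = np.corrcoef(bl, bd)[0, 1]
    return hist2d, corr

# Usage with synthetic data standing in for a co-registered image/range pair
rng = np.random.default_rng(0)
lum = rng.random((128, 128))
dep = gaussian_filter(lum, 3) + 0.1 * rng.random((128, 128))
hist2d, corr = joint_bandpass_stats(lum, dep)
print(f"luminance/depth bandpass correlation: {corr:.3f}")
```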
405

Recognizing human activity using RGBD data

Xia, Lu, active 21st century 03 July 2014 (has links)
Traditional computer vision algorithms try to understand the world using visible light cameras. However, there are inherent limitations to this type of data source. First, visible light images are sensitive to illumination changes and background clutter. Second, the 3D structural information of the scene is lost when projecting the 3D world onto 2D images, and recovering 3D information from 2D images is a challenging problem. Range sensors, which capture 3D characteristics of the scene, have existed for over thirty years; however, earlier range sensors were either too expensive, difficult to use in human environments, slow at acquiring data, or provided poor estimates of distance. Recently, easy access to RGBD data at real-time frame rates has led to a revolution in perception and inspired much new research using RGBD data. I propose algorithms to detect persons and understand their activities using RGBD data. I demonstrate that solutions to many computer vision problems may be improved with the added depth channel. The 3D structural information may give rise to algorithms with real-time and view-invariant properties in a faster and easier fashion. When both data sources are available, features extracted from the depth channel may be combined with traditional features computed from the RGB channels to generate more robust systems with enhanced recognition abilities, which may be able to deal with more challenging scenarios. As a starting point, the first problem is to find persons of various poses in the scene, including moving and static persons. Localizing humans from RGB images is limited by lighting conditions and background clutter; depth images give alternative ways to find humans in the scene. In the past, detection of humans from range data was usually achieved by tracking, which does not work well for indoor person detection. In this thesis, I propose a model-based approach to detect persons using the structural information embedded in the depth image. I propose a 2D head contour model and a 3D head surface model to look for the head-shoulder part of the person. A segmentation scheme is then proposed to segment the full human body from the background and extract its contour. I also give a tracking algorithm based on the detection result. I then turn to recognizing human actions and activities, for which I propose two features. The first feature is drawn from the skeletal joint locations estimated from a depth image. It is a compact representation of the human posture called histograms of 3D joint locations (HOJ3D). This representation is view-invariant, and the whole algorithm runs in real time. This feature may benefit many applications that need a fast estimate of the posture and action of the human subject. The second feature is a spatio-temporal feature for depth video called the Depth Cuboid Similarity Feature (DCSF). Interest points are extracted using an algorithm that effectively suppresses noise and finds salient human motions. A DCSF is extracted centered on each interest point, and together they form the description of the video contents. This descriptor can be used to recognize activities with no dependence on skeleton information or pre-processing steps such as motion segmentation, tracking, or even image de-noising or hole-filling, making it more flexible and widely applicable to many scenarios.
Finally, all the features developed herein are combined to solve a novel problem: first-person human activity recognition using RGBD data. Traditional activity recognition algorithms focus on recognizing activities from a third-person perspective; I propose to recognize activities from a first-person perspective with RGBD data. This task is novel and extremely challenging due to the large amount of camera motion, caused either by self-exploration or by the response to the interaction. I extract 3D optical flow features as motion descriptors, 3D skeletal joint features as posture descriptors, and spatio-temporal features as local appearance descriptors to describe the first-person videos. To address the ego-motion of the camera, I propose an attention mask to guide the recognition procedures and separate features in the ego-motion region from those in the independent-motion region. The 3D features are very useful for summarizing the discriminative information of the activities. In addition, combining the 3D features with existing 2D features yields more robust recognition results and makes the algorithm capable of dealing with more challenging cases. / text
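As a rough, hedged sketch of a compact joint-based posture feature in the spirit of the HOJ3D representation described above (the actual descriptor's reference joint, bin layout, soft voting, and the orientation alignment that gives it view-invariance are not reproduced here; all names and parameters below are illustrative assumptions), one can histogram the skeleton joints' positions in spherical coordinates about a reference joint:

```python
import numpy as np

def joint_location_histogram(joints, ref_idx=0, n_azimuth=12, n_elevation=6):
    """Histogram of 3D joint locations around a reference joint (e.g., hip center).
    `joints` is an (N, 3) array of joint coordinates; bin counts are illustrative."""
    rel = joints - joints[ref_idx]                      # center on the reference joint
    rel = np.delete(rel, ref_idx, axis=0)               # drop the reference itself
    azimuth = np.arctan2(rel[:, 1], rel[:, 0])          # in [-pi, pi]
    elevation = np.arctan2(rel[:, 2], np.linalg.norm(rel[:, :2], axis=1))  # [-pi/2, pi/2]
    hist, _, _ = np.histogram2d(
        azimuth, elevation,
        bins=[n_azimuth, n_elevation],
        range=[[-np.pi, np.pi], [-np.pi / 2, np.pi / 2]],
    )
    hist = hist.ravel()
    return hist / max(hist.sum(), 1)                    # normalize to a distribution

# Usage with a synthetic 20-joint skeleton
skeleton = np.random.default_rng(1).normal(size=(20, 3))
feature = joint_location_histogram(skeleton)
print(feature.shape)  # (72,) for 12 x 6 bins
```

A per-frame histogram like this can then be fed to any sequence classifier; the real HOJ3D pipeline adds orientation normalization so the feature stays stable as the subject rotates relative to the camera.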
406

Crack Analysis in Silicon Solar Cells

Echeverria Molina, Maria Ines 01 January 2012 (has links)
The solar cell business is demanding and challenging: more efficient and lower-cost materials are required to reduce costs and increase the yield of electrical energy converted from the Sun's energy. The silicon-based solar cell has proven to be the most efficient and cost-effective industrial photovoltaic device. However, the production cost of solar cells increases due to the presence of cracks (internal as well as external) in the silicon wafer. Wafer cracks are monitored during solar cell fabrication, but present monitoring techniques are not sufficient for improving the manufacturing process. In this work, attempts are made to understand the location of cracks in single-crystal and polycrystalline silicon solar cells and to analyze the impact of such cracks on cell performance through Scanning Acoustic Microscopy (SAM) and photoluminescence (PL) based techniques. Features of single-crystal and polycrystalline silicon solar cells observed through PL and SAM were further investigated with focused ion beam (FIB) cross-sectioning and scanning electron microscopy (SEM). The results revealed that SAM can be a reliable method for visualizing and understanding cracks in solar cells. The efficiency of a solar cell was calculated using its current (I)-voltage (V) characteristics before and after cracking. The efficiency reduction, ranging from 3.69% to 14.73% for single-crystal and polycrystalline samples, highlighted the importance of crack monitoring as well as imaging techniques. The aim of the research is to improve the manufacturing process of solar cells by locating and understanding cracks in single-crystal and polycrystalline silicon-based devices.
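As a hedged illustration of the efficiency calculation mentioned above (the study's actual measurement conditions, cell dimensions, and I–V data are not given here), efficiency can be computed from an I–V sweep as the maximum output power divided by the incident optical power. The irradiance, cell area, and I–V curves below are assumed, purely illustrative values.

```python
import numpy as np

def cell_efficiency(voltage_v, current_a, cell_area_m2, irradiance_w_m2=1000.0):
    """Solar cell efficiency from an I-V sweep: eta = P_max / (irradiance * area).
    1000 W/m^2 corresponds to standard AM1.5 test conditions (an assumption here)."""
    power = np.asarray(voltage_v) * np.asarray(current_a)
    p_max = power.max()
    p_in = irradiance_w_m2 * cell_area_m2
    return p_max / p_in

# Hypothetical sweeps before and after cracking (values are illustrative only)
v = np.linspace(0.0, 0.6, 61)
i_before = 3.0 * (1.0 - np.exp((v - 0.6) / 0.05))   # crude diode-like I-V shape
i_after = 0.9 * i_before                            # cracked cell delivers less current
area = 0.156 * 0.156                                # 156 mm x 156 mm wafer, in m^2

eta_before = cell_efficiency(v, i_before, area)
eta_after = cell_efficiency(v, i_after, area)
print(f"before: {eta_before:.2%}, after: {eta_after:.2%}, "
      f"relative drop: {1 - eta_after / eta_before:.1%}")
```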
407

Vadose zone processes affecting water table fluctuations: Conceptualization and modeling considerations

Shah, Nirjhar 01 June 2007 (has links)
This dissertation focuses on a variety of vadose zone processes that impact water table fluctuations. The development of vadose zone process conceptualization has been limited by both a lack of recognition of the importance of the vadose zone and the absence of suitable field data. Recent studies have shown, however, that vadose zone soil moisture dynamics, especially in shallow water table environments, can have a significant effect on processes such as infiltration, recharge to the water table, and evapotranspiration. This dissertation therefore attempts to elucidate approaches for modeling vadose zone soil moisture dynamics, with the ultimate objective of predicting different vertical and horizontal hydrological fluxes. The first part of the dissertation demonstrates a new methodology using soil moisture and water table data collected along a flow transect. The methodology was found to be successful in estimating hydrological fluxes such as evapotranspiration, infiltration, and runoff. The observed dataset was also used to verify an exponential model developed to quantify the groundwater component of total evapotranspiration. This analysis was followed by a study of the impact of soil moisture variability in the vadose zone on water table fluctuations. It was found that antecedent soil moisture conditions in the vadose zone greatly affected specific yield values, causing a broad range of water table fluctuations for similar boundary fluxes; hence, use of a constant specific yield value can produce inaccurate results. Having gained insight into the processes of evapotranspiration and specific yield, a threshold-based model to determine evapotranspiration and the subsequent water table fluctuation was conceptualized and validated. A discussion of plant root water uptake and its impact on vadose zone soil moisture dynamics is presented in the latter half of the dissertation, along with a methodology that uses soil moisture and water table data to determine root water uptake from different sections of the roots. It was found that, unlike in traditional empirical root water uptake models, the uptake was not only proportional to the root fraction but also depended on the ambient soil moisture conditions. A modeling framework based on root hydraulic characteristics is provided as well. Lastly, a preliminary analysis of observed data indicated that, under certain field conditions, air entrapment and air pressurization can significantly affect observed water table values, and a modeling technique must be developed to correct such observations.
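As a hedged aside on the specific-yield discussion above, the textbook water-table fluctuation relation (not necessarily the exact formulation used in the dissertation) links the boundary fluxes to the resulting change in water-table elevation through the specific yield:

```latex
% Water-table fluctuation relation (textbook form, illustrative)
\Delta h = \frac{R - ET_{g}}{S_y}
```
where Δh is the water-table rise, R the recharge reaching the water table, ET_g the groundwater component of evapotranspiration, and S_y the specific yield. Because S_y depends on antecedent vadose-zone moisture, treating it as a constant can misestimate Δh for otherwise identical fluxes, which is the abstract's point about inaccurate results.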
408

Assessing the consequences of hurricane-induced fragmentation of mangrove forest on habitat and nekton in Big Sable Creek, Florida

Silverman, Noah L 01 June 2006 (has links)
The passage of two major hurricanes across southwest Florida (the Category 5 Labor Day Hurricane of 1935 and Category 4 Hurricane Donna in 1960) resulted in fragmentation of the mangrove forest at Big Sable Creek, Everglades National Park. Over time, forest fragmentation led to forest loss and patchy conversion to unvegetated mudflats. My goal was to determine the consequences of forest fragmentation for nekton (i.e., fish and decapod crustaceans) inhabiting the intertidal zone. I used block nets across intertidal rivulets to compare nekton leaving replicate forest and unvegetated mudflat sites from October 2002 through April 2004. Overall nekton density (individuals per 100 m³) was significantly greater (rmANOVA, p < 0.001) for mangrove (212 per 100 m³) than mudflat (26 per 100 m³) habitats. Biomass (g per 100 m³) was also significantly greater for mangrove (715 g per 100 m³) than mudflat (20 g per 100 m³) habitats. The composition of the nekton assemblage also differed between habitat types (ANOSIM global R = 0.416, p < 0.001): structure-associated species dominated forested sites, whereas schooling species dominated mudflats. When mangrove destruction and mortality result in fragmentation (Craighead and Gilbert, 1962; Smith et al., 1994; Wanless et al., 1994), nekton density and biomass will likely decline as a consequence.
409

Earthquake Characteristics as Imaged by the Back-Projection Method

Kiser, Eric January 2012 (has links)
This dissertation explores the capability of dense seismic array data for imaging the rupture properties of earthquake sources using a method known as back-projection. Only within the past 10 or 15 years has implementation of the method become feasible, through the development of large-aperture seismic arrays such as the High Sensitivity Seismograph Network in Japan and the Transportable Array in the United States. Coincidentally, this buildup in data coverage has been accompanied by a global cluster of giant earthquakes (Mw > 8.0). Much of the material in this thesis is devoted to imaging the source complexity of these large events. In particular, evidence for rupture segmentation, dynamic triggering, and frequency-dependent energy release is presented. These observations have substantial implications for evaluating the seismic and tsunami hazards of future large earthquakes. In many cases, the details of the large ruptures can only be imaged by the back-projection method by adding different data sets and incorporating additional processing steps that enhance low-amplitude signals. These improvements in resolution can also be used to study much smaller events. This approach is taken for studying two very different types of earthquakes. First, a global study of the enigmatic intermediate-depth (100-300 km) earthquakes is performed. The results show that these events commonly have sub-horizontal rupture planes and suggest dynamic triggering of multiple sub-events. From these observations, a hypothesis for the generation of intermediate-depth events is proposed. Second, the early aftershock sequences of the 2004 Mw 9.1 Sumatra-Andaman and 2011 Mw 9.0 Tohoku, Japan earthquakes are studied using the back-projection method. These analyses show that many events can be detected that are not in any local or global earthquake catalogues. In particular, the locations of aftershocks in the back-projection results of the 2011 Tohoku sequence fill in gaps in the aftershock distribution of the Japan Meteorological Agency catalogue. These results may change inferences about the behavior of the 2011 mainshock, as well as about the nature of future seismicity in this region. In addition, the rupture areas of the largest aftershocks can be determined and compared to the rupture area of the mainshock. For the Tohoku event, this comparison reveals that the aftershocks contribute significantly to the cumulative failure area of the subduction interface. This result implies that future megathrust events in this region can have larger magnitudes than the 2011 event. / Earth and Planetary Sciences
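As a hedged, highly simplified sketch of the back-projection idea described above (real implementations align and filter the array data, use 3D travel-time tables, and scan a time-space grid; the function names, array shapes, and synthetic data below are assumptions), each candidate source location is scored by stacking the array waveforms shifted by their predicted travel times:

```python
import numpy as np

def back_project(waveforms, travel_times, dt, window=50):
    """Stack array seismograms at predicted travel times for each candidate source.

    waveforms    : (n_stations, n_samples) array of seismograms
    travel_times : (n_sources, n_stations) predicted travel times in seconds
    dt           : sample interval in seconds
    window       : number of samples of the stack to sum per source
    Returns an (n_sources,) array of stacked energy ("brightness").
    """
    n_sources, n_stations = travel_times.shape
    n_samples = waveforms.shape[1]
    brightness = np.zeros(n_sources)
    for src in range(n_sources):
        stack = np.zeros(window)
        for sta in range(n_stations):
            start = int(round(travel_times[src, sta] / dt))
            if start + window <= n_samples:
                stack += waveforms[sta, start:start + window]  # align and stack
        brightness[src] = np.sum(stack ** 2)                   # stacked energy
    return brightness

# Usage with synthetic data: inject a pulse at one candidate's predicted delays
rng = np.random.default_rng(2)
tt = rng.uniform(5.0, 15.0, size=(10, 8))        # 10 candidate sources, 8 stations
dt = 0.1
data = 0.1 * rng.normal(size=(8, 300))
true_src = 3
for sta in range(8):
    s = int(round(tt[true_src, sta] / dt))
    data[sta, s:s + 5] += 1.0
print(np.argmax(back_project(data, tt, dt)))      # should recover index 3
```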
410

Computational Imaging For Miniature Cameras

Salahieh, Basel January 2015 (has links)
Miniature cameras play a key role in numerous imaging applications ranging from endoscopy and metrology inspection devices to smartphones and head-mounted acquisition systems. However, due to physical constraints, the imaging conditions, and the low quality of small optics, their imaging capabilities are limited in terms of the delivered resolution, the acquired depth of field, and the captured dynamic range. Computational imaging jointly addresses the imaging system and the reconstruction algorithms to bypass the traditional limits of optical systems and deliver better restorations for various applications. The scene is encoded into a set of efficient measurements, which can then be computationally decoded to output a richer estimate of the scene than the raw images captured by conventional imagers. In this dissertation, three task-based computational imaging techniques are developed to make low-quality miniature cameras capable of delivering realistic high-resolution reconstructions, providing full-focus imaging, and acquiring depth information for high-dynamic-range objects. For the superresolution task, a non-regularized direct superresolution algorithm is developed to achieve realistic restorations without being penalized by improper assumptions (e.g., optimizers, priors, and regularizers) made in the inverse problem. An adaptive frequency-based filtering scheme is introduced to upper-bound the reconstruction errors while still producing finer detail than previous methods under realistic imaging conditions. For the full-focus imaging task, a computational depth-based deconvolution technique is proposed to bring a scene captured by an ordinary fixed-focus camera to full focus based on a depth-variant point spread function prior. The ringing artifacts are suppressed on three levels: block tiling to eliminate boundary artifacts, adaptive reference maps to reduce ringing initiated by sharp edges, and block-wise deconvolution or depth-based masking to suppress artifacts initiated by neighboring depth-transition surfaces. Finally, for the depth acquisition task, a multi-polarization fringe projection imaging technique is introduced to eliminate saturated points and enhance the fringe contrast by selecting the proper polarized-channel measurements. The developed technique can be easily extended to include measurements captured under different exposure times to obtain more accurate shape rendering for very-high-dynamic-range objects.
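As a hedged sketch of the channel-selection idea behind the multi-polarization fringe projection technique (the dissertation's actual selection rule, number of polarization channels, and phase-shifting scheme are not specified here; all names, thresholds, and synthetic data below are assumptions), each pixel can take its measurements from the polarization channel that is unsaturated and has the highest fringe modulation, with phase then recovered by standard four-step phase shifting:

```python
import numpy as np

def select_channel_and_phase(frames, sat_level=0.98):
    """Per-pixel polarization-channel selection followed by 4-step phase shifting.

    frames : (n_channels, 4, H, W) array of fringe images, four phase shifts per
             polarization channel, intensities normalized to [0, 1].
    Returns the wrapped phase map of shape (H, W).
    """
    i0, i1, i2, i3 = frames[:, 0], frames[:, 1], frames[:, 2], frames[:, 3]
    modulation = 0.5 * np.sqrt((i3 - i1) ** 2 + (i0 - i2) ** 2)   # fringe contrast
    saturated = frames.max(axis=1) >= sat_level                    # any shifted frame clipped
    score = np.where(saturated, -np.inf, modulation)               # exclude saturated pixels
    best = np.argmax(score, axis=0)                                # (H, W) channel index

    # Gather the chosen channel's four phase-shifted intensities per pixel
    h, w = best.shape
    rows, cols = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    phase = np.arctan2(i3[best, rows, cols] - i1[best, rows, cols],
                       i0[best, rows, cols] - i2[best, rows, cols])
    return phase  # wrapped phase in [-pi, pi]

# Usage with synthetic data: 3 polarization channels, the brightest one partially saturated
rng = np.random.default_rng(3)
H, W = 64, 64
true_phase = np.tile(np.linspace(0, 4 * np.pi, W), (H, 1))
frames = np.zeros((3, 4, H, W))
for c, gain in enumerate([1.2, 0.8, 0.5]):          # brighter channels saturate sooner
    for k in range(4):
        frames[c, k] = np.clip(0.5 + 0.5 * gain * np.cos(true_phase + k * np.pi / 2), 0, 1)
print(select_channel_and_phase(frames).shape)        # (64, 64)
```

The same per-pixel selection readily extends to multiple exposure times, as the abstract notes, by scoring exposure/polarization combinations with the same unsaturated-and-highest-modulation rule.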
