241

Ansvar för publikation av missförhållanden: En jämförelse mellan företagshemligheter och trade secrets. / Civil liability for publication of misconduct: Comparing U.S. and Swedish regulation on proprietary information.

Behndig, Mattias January 2021 (has links)
No description available.
242

Spectrally Uniform Frames And Spectrally Optimal Dual Frames

Pehlivan, Saliha 01 January 2013 (has links)
Frames have been useful in signal transmission due to their built-in redundancy. In recent years, the erasure problem in data transmission has been the focus of considerable research in the case where the error estimate is measured by the operator (or matrix) norm. Sample results include the characterization of one-erasure optimal Parseval frames, the connection between two-erasure optimal Parseval frames and equiangular frames, and some characterizations of optimal dual frames. If iterations are allowed in the reconstruction process of the signal vector, then the spectral radius is a more appropriate measure for the error operators than the operator norm. We obtain a complete characterization of spectrally one-uniform frames (i.e., one-erasure optimal frames with respect to the spectral radius measurement) in terms of the redundancy distribution of the frame. Our characterization relies on the connection between spectrally optimal frames and the linear connectivity property of the frame. We prove that the linear connectivity property is equivalent to the intersection dependence property, and is also closely related to the well-known concept of a k-independent set. For spectrally two-uniform frames, it is necessary that the frame be linearly connected. We conjecture that it is also necessary that a two-uniform frame be n-independent. We confirm this conjecture for the cases N = n+1 and N = n+2, where N is the number of vectors in a frame for an n-dimensional Hilbert space. Additionally, we establish several necessary and sufficient conditions for the existence of an alternate dual frame that makes the iterated reconstruction work.
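As a small numerical illustration of the distinction drawn above (the frame and the alternate dual below are made-up examples, not taken from the dissertation), the error operator for a one-coefficient erasure can have a spectral radius strictly smaller than its operator norm, and it is the spectral radius that decides whether iterated reconstruction converges:

```python
import numpy as np

# A frame of N = 4 vectors for R^2 (rows of F) and an alternate dual G:
# reconstruction is x = G.T @ (F @ x) because G.T @ F = I.
F = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0],
              [1.0, -1.0]])
G = F / 3 + np.array([[0.0, 0.0],
                      [2/3, 0.0],
                      [-1/3, 0.0],
                      [1/3, 0.0]])
assert np.allclose(G.T @ F, np.eye(2))        # G is a dual frame of F

# Erasing coefficient i turns the reconstruction into (I - E) x, where
# E = g_i f_i^T is the rank-one error operator.
i = 2                                          # erase the third coefficient
E = np.outer(G[i], F[i])
op_norm = np.linalg.norm(E, 2)                 # ~0.471: one-shot error size
spec_rad = max(abs(np.linalg.eigvals(E)))      # ~0.333: governs iteration

# Iterated reconstruction x_{k+1} = y + E x_k converges to x exactly when
# spec_rad < 1, even if the operator norm is larger.
x = np.array([0.7, -0.3])
y = x - E @ x                                  # data after one erasure
x_rec = y.copy()
for _ in range(60):
    x_rec = y + E @ x_rec
print(op_norm, spec_rad, np.allclose(x_rec, x))
```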
243

Sector-Targeting for Controlling Nutrient Loadings: A Case Study of the North Fork of the Shenandoah River Watershed

Singh, Bibek B. 18 August 2011 (has links)
The main purpose of a Total Maximum Daily Load (TMDL) is to achieve a water quality standard. The economic costs of reducing nutrient loadings are often not taken into account during TMDL development. In this study, sector targeting is used to minimize the total cost of nutrient reduction by targeting sectors with lower costs per unit of pollution reduction. This study focuses on targeting nitrogen (N) and phosphorus (P) loading reductions from three sectors in the North Fork watershed: agricultural, point source, and urban non-point source. Linear programming optimization models were created to determine an optimal solution that minimizes the total compliance cost of implementing best management practices (BMPs) subject to targeted loading reductions in N and P in the watershed. The optimal solutions under uniform allocation and under sector targeting were compared for N and P loading reductions separately and for N and P reductions simultaneously. The comparison showed that sector targeting is the more cost-effective approach to achieving the desired nutrient reduction. From the agricultural sector, cropland and hayland buffers provided the best options for reducing both N and P. Urban BMPs are least efficient in terms of nutrient reduction and cost. Similarly, among point-source upgrades, Broadway has the lowest cost of upgrade per unit of N or P reduction. This study implies that both stakeholders and policymakers can use targeting to achieve nutrient reduction goals at lower cost. Policymakers can incorporate economic considerations into the TMDL planning process, which can help in developing a cost-effective tributary strategy and cost-share program. / Master of Science
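As a hedged sketch of the kind of linear program described above (all costs, capacity caps, and P-per-N ratios below are illustrative placeholders, not the study's data), the sector-targeting optimization can be set up as follows:

```python
import numpy as np
from scipy.optimize import linprog

# Decision variables: lbs of N reduced in each sector (agricultural,
# point source, urban non-point source).  All numbers are illustrative.
cost = np.array([12.0, 25.0, 60.0])        # $/lb of N reduced, per sector
p_per_n = np.array([0.15, 0.10, 0.05])     # lbs of P reduced per lb of N

N_target, P_target = 10_000.0, 1_200.0

# linprog minimizes cost @ x subject to A_ub @ x <= b_ub, so the two
# ">= target" constraints enter with a sign flip.
A_ub = -np.vstack([np.ones(3), p_per_n])
b_ub = -np.array([N_target, P_target])
bounds = [(0, 8_000), (0, 6_000), (0, 3_000)]   # per-sector capacity caps

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(res.x, res.fun)   # targeted allocation and its total compliance cost

# A uniform-allocation benchmark splits the N target equally instead;
# it is typically costlier (and here would also miss the P target).
uniform = np.full(3, N_target / 3)
print(cost @ uniform)
```

The cost advantage of targeting comes from letting the optimizer load the cheapest sectors first, up to their capacity caps, rather than spreading reductions evenly.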
244

Use of Nonlinear Volterra Theory in Predicting the Propagation of Non-uniform Flow Through an Axial Compressor

Luedke, Jonathan Glenn 07 December 2001 (has links)
Total pressure non-uniformities in an axial flow compressor can contribute to losses in aerodynamic operability, through reductions in stall margin, pressure rise, and mass flow, and to loss of structural integrity, through high cycle fatigue (HCF). HCF is a primary mechanism of blade failure, caused by vibrations at levels exceeding material endurance limits. Previous research has shown total pressure distortions to be the dominant HCF driver in aero engines and has demonstrated the damaging results of total pressure distortion induced HCF on first-stage fan and compressor blade rows [Manwaring et al., 1997]. It is, however, also of interest to know how these distortion patterns propagate through a rotor stage and impact subsequent downstream stages and engine components. With current modeling techniques, total pressure distortion magnitudes can be directly correlated with induced blade vibratory levels and modes. The ability to predict downstream distortion patterns therefore allows the vibratory response of downstream blades to inlet distortion patterns to be inferred. Given a total pressure distortion excitation entering a blade row, the nonlinear Volterra series can serve as a predictor of the downstream total pressure profile and thereby provide insight into the potential for HCF in downstream blade rows. This report presents the adaptation of nonlinear Volterra theory to predicting the transport of non-uniform total pressure distortions through an axial flow compressor. The use of Volterra theory in nonlinear system modeling relies on knowledge of the Volterra kernels, which capture a system's response characteristics. Here an empirical method is illustrated for identifying these kernels from total pressure distortion patterns measured both upstream and downstream of a transonic rotor of modern design. A Volterra model based on these kernels has been applied to the prediction of distortion transfer at new operating points of the same rotor, with promising results. Methods for improving Volterra predictions by training Volterra kernels along individual streamlines and by normalizing total pressure data sets with physics-based parameters are also investigated. / Master of Science
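As a minimal sketch of empirical kernel identification in this spirit (the synthetic signals, memory length, and least-squares setup are illustrative assumptions, not the thesis's procedure), a discrete second-order Volterra model can be fit from paired upstream/downstream data:

```python
import numpy as np

rng = np.random.default_rng(0)
M = 4                                    # assumed kernel memory length
u = rng.standard_normal(400)             # stand-in upstream pressure profile

# Synthetic "downstream" truth: a linear kernel plus one quadratic term.
# np.roll is used for lags, treating the profile as periodic (natural for
# a circumferential distortion pattern).
y = (0.8 * u
     + 0.3 * np.roll(u, 1)
     + 0.2 * np.roll(u, 1) * np.roll(u, 2))

# Regression matrix: linear lags h1[m] and all quadratic pairs h2[m1, m2].
lags = np.column_stack([np.roll(u, m) for m in range(M)])
quads = np.column_stack([lags[:, i] * lags[:, j]
                         for i in range(M) for j in range(i, M)])
X = np.hstack([lags, quads])

coef, *_ = np.linalg.lstsq(X, y, rcond=None)   # stacked h1 and h2 entries
print(np.round(coef[:M], 3))                    # ~ [0.8, 0.3, 0, 0]
```

Once identified, the same kernels predict the downstream profile at a new operating point via `X_new @ coef`, which mirrors how the trained model is reused above.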
245

Pier scour prediction in non-uniform gravel beds

Pandey, M., Oliveto, G., Pu, Jaan H., Sharma, P.K., Ojha, C.S.P. 28 July 2020 (has links)
Yes / Pier scour has been extensively studied in laboratory experiments. However, scour depth relationships based on data at the laboratory scale often yield unacceptable results when extended to field conditions. In this study, non-uniform gravel-bed laboratory and field datasets with gravel of median size ranging from 2.7 to 14.25 mm were considered to predict the maximum equilibrium scour depth at cylindrical piers. Specifically, a total of 217 datasets were collected: 132 from literature sources and 85 from new laboratory-scale experiments performed in this study, which constitute a novel contribution of this paper. From the analysis of the data, it was observed that Melville and Coleman's equation performs well on the laboratory datasets, while it tends to overestimate field measurements. Guo's and Kim et al.'s relationships showed good agreement only for laboratory datasets with finer non-uniform sediments: deviations in predicting the maximum scour depth in non-uniform gravel beds were found to be significantly greater than those for non-uniform sand and fine-gravel beds. Consequently, new K-factors for Melville and Coleman's equation were proposed in this study for non-uniform gravel-bed streams using a curve-fitting method. The results revealed good agreement between observations and predictions, which might be an attractive advance in overcoming scale effects. Moreover, a sensitivity analysis was performed to identify the most sensitive K-factors.
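As a hedged sketch of the multiplicative K-factor structure of Melville and Coleman's method and of the curve-fitting idea (the factor names follow the commonly cited form of the equation, and all numbers are placeholders rather than the paper's data):

```python
import numpy as np
from scipy.optimize import curve_fit

def scour_depth(K_yb, K_I=1.0, K_d=1.0, K_s=1.0, K_theta=1.0, K_t=1.0):
    """Equilibrium scour depth as a product of correction factors.

    K_yb carries the length scale (depth-foundation size factor, in m);
    the remaining factors are dimensionless corrections for flow
    intensity, sediment size, pier shape, alignment, and time.
    """
    return K_yb * K_I * K_d * K_s * K_theta * K_t

# Recalibrating for non-uniform gravel beds amounts to refitting a factor
# such as K_d against observed scour depths, e.g. by least squares:
d_obs = np.array([0.21, 0.35, 0.18])      # observed depths (m), made up
base = np.array([0.30, 0.50, 0.25])       # product of the remaining factors
fit = lambda b, K_d: K_d * b              # K_d rescales the base product
(K_d_hat,), _ = curve_fit(fit, base, d_obs)
print(K_d_hat)
```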
246

A New Beamforming Approach Using 60 GHz Antenna Arrays for Multi–Beams 5G Applications

Al-Sadoon, M.A.G., Patwary, M.N., Zahedi, Y., Ojaroudi Parchin, Naser, Aldelemy, Ahmad, Abd-Alhameed, Raed 26 May 2022 (has links)
Yes / Recent studies have centred on new solutions, at different network elements and stages, to the increasing energy and data-rate demands of the fifth generation and beyond (B5G). This work offers a compact circular patch antenna operating at 60 GHz and covering a 4 GHz spectrum bandwidth, based on a new, efficient digital beamforming approach for 5G wireless communication networks. Massive Multiple-Input Multiple-Output (M-MIMO) and beamforming technology are used to build and simulate an active multiple-beam antenna system. Thirty-two-element linear and sixty-four-element planar antenna array configurations are modelled and constructed to work as base stations for 5G mobile communication networks. Furthermore, a new beamforming approach called the Projection Noise Correlation Matrix (PNCM) method is presented to compute and optimise the feed weights of the array elements. The key idea of the PNCM method is to uniformly sample a portion of the measured noise correlation matrix in order to best represent the entire measured matrix. The sampled data are then utilised to build a projected matrix using the pseudoinverse approach, in order to determine the best-fit solution for the system and to prevent potential singularities caused by the matrix inversion process. The PNCM is a low-complexity method since it avoids eigenvalue decomposition and computing the entire matrix inversion, and it does not require signal and interference correlation matrices in the weight optimisation process. The suggested approach is compared with three standard beamforming methods in an intensive Monte Carlo simulation to demonstrate its advantage. The results reveal that the proposed method delivers the best Signal-to-Interference Ratio (SIR) improvement among the compared beamformers.
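Since the abstract does not spell out the PNCM algorithm, the following is only a loose sketch of its stated ingredients, uniform sampling of the measured noise correlation matrix and a pseudoinverse in place of a full inversion; the sampling scheme and the distortionless normalization are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n_elem, n_snap = 32, 200                       # 32-element linear array

# Measured noise correlation matrix from complex noise snapshots.
noise = (rng.standard_normal((n_elem, n_snap))
         + 1j * rng.standard_normal((n_elem, n_snap)))
R = noise @ noise.conj().T / n_snap

idx = np.arange(0, n_elem, 4)                  # uniform sampling of R
R_s = R[np.ix_(idx, idx)]

# Steering vector toward angle theta for half-wavelength element spacing.
theta = np.deg2rad(20.0)
a = np.exp(1j * np.pi * np.arange(n_elem) * np.sin(theta))
a_s = a[idx]

# The pseudoinverse of the sampled matrix avoids eigendecomposition and
# any singularity a full matrix inversion could hit.
w_s = np.linalg.pinv(R_s) @ a_s
w_s /= (a_s.conj() @ w_s)                      # unit gain toward theta
print(w_s.shape)                               # weights on sampled elements
```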
247

О неравенстве Тайкова для сопряженных тригонометрических полиномов : магистерская диссертация / On the Taikov inequality for conjugate trigonometric polynomials

Серков, А. О., Serkov, A. O. January 2015 (has links)
We study a Szegő-type inequality between the uniform norm of a fractional derivative of the conjugate of a trigonometric polynomial and the uniform norm of the polynomial itself. We prove that the set of extremal polynomials in the Szegő inequality for the zero-order derivative on the set of trigonometric polynomials contains, in addition to the odd polynomials found earlier by L. V. Taikov, even polynomials. We also completely describe the class of extremal polynomials for this case.
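For orientation, the general shape of such a Szegő-type inequality can be written as follows; the notation and the sharp constant C(n, α) are illustrative assumptions, not the thesis's own statement:

```latex
% Sketch: uniform-norm Szego-type bound for the fractional derivative of
% order \alpha \ge 0 of the conjugate polynomial \widetilde{T}_n.
\[
  \bigl\| \widetilde{T}_n^{(\alpha)} \bigr\|_{C}
    \;\le\; C(n,\alpha)\, \| T_n \|_{C},
  \qquad \deg T_n \le n,
\]
% where \| \cdot \|_C is the uniform norm over the period and C(n,\alpha)
% is the sharp constant.  The zero-order case \alpha = 0 compares
% \widetilde{T}_n with T_n directly; extremal polynomials are those
% attaining equality.
```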
248

Consistency and Uniform Bounds for Heteroscedastic Simulation Metamodeling and Their Applications

Zhang, Yutong 05 September 2023 (has links)
Heteroscedastic metamodeling has gained popularity as an effective tool for analyzing and optimizing complex stochastic systems. A heteroscedastic metamodel provides an accurate approximation of the input-output relationship implied by a stochastic simulation experiment whose output is subject to input-dependent noise variance. Several challenges remain unsolved in this field. First, in-depth investigations into the consistency of heteroscedastic metamodeling techniques, particularly from the sequential prediction perspective, are lacking. Second, sequential heteroscedastic metamodel-based level-set estimation (LSE) methods are scarce. Third, the increasingly high computational cost required by heteroscedastic Gaussian process-based LSE methods in the sequential sampling setting is a concern. Additionally, when constructing a valid uniform bound for a heteroscedastic metamodel, the impact of noise variance estimation is not adequately addressed. This dissertation aims to tackle these challenges and provide promising solutions. First, we investigate the information consistency of a widely used heteroscedastic metamodeling technique, stochastic kriging (SK). Second, we propose SK-based LSE methods leveraging novel uniform bounds for input-point classification. Moreover, we incorporate the Nyström approximation and a principled budget allocation scheme to improve the computational efficiency of SK-based LSE methods. Lastly, we investigate empirical uniform bounds that take into account the impact of noise variance estimation, ensuring adequate coverage capability. / Doctor of Philosophy / In real-world engineering problems, understanding and optimizing complex systems can be challenging and prohibitively expensive. Computer simulation is a valuable tool for analyzing and predicting system behaviors, allowing engineers to explore different scenarios without relying on costly physical prototypes. However, the increasing complexity of simulation models leads to a higher computational burden. Metamodeling techniques have emerged to address this issue by accurately approximating the system performance response surface based on limited simulation experiment data, enabling real-time decision-making. Heteroscedastic metamodeling goes further by considering the varying noise levels inherent in simulation outputs, resulting in more robust and accurate predictions. Among various techniques, stochastic kriging (SK) stands out by striking a good balance between computational efficiency and statistical accuracy. Despite extensive research on SK, challenges persist in its application and methodology. These include a limited understanding of SK's consistency properties, an absence of sequential SK-based algorithms for level-set estimation (LSE) under heteroscedasticity, and the deteriorating computational efficiency of SK-based LSE methods in implementation. Furthermore, a precise construction of uniform bounds for the SK predictor is also missing. This dissertation aims to address these challenges. First, the information consistency of SK from a prediction perspective is investigated. Then, sequential SK-based procedures for LSE in stochastic simulation, incorporating novel uniform bounds for accurate input-point classification, are proposed. Furthermore, a popular approximation technique is incorporated to enhance the computational efficiency of the SK-based LSE methods. Lastly, empirical uniform bounds are investigated considering the impact of noise variance estimation.
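As a minimal sketch of the SK predictor underlying this work (in the spirit of Ankenman, Nelson, and Staum's stochastic kriging; the Gaussian kernel, fixed hyperparameters, and toy data are assumptions):

```python
import numpy as np

def sk_predict(X, ybar, noise_var, n_reps, x0, tau2=1.0, ell=0.5, beta=0.0):
    """SK prediction at x0 from sample means ybar at design points X, with
    estimated output variances noise_var and replication counts n_reps."""
    def k(a, b):  # Gaussian (squared-exponential) covariance
        d = a[:, None] - b[None, :]
        return tau2 * np.exp(-0.5 * (d / ell) ** 2)

    # Input-dependent (intrinsic) noise enters only on the diagonal,
    # scaled down by the number of replications at each design point.
    K = k(X, X) + np.diag(noise_var / n_reps)
    kv = k(X, np.atleast_1d(x0))[:, 0]
    w = np.linalg.solve(K, kv)
    mean = beta + w @ (ybar - beta)
    var = tau2 - kv @ np.linalg.solve(K, kv)     # predictive variance
    return mean, max(var, 0.0)

# Toy heteroscedastic experiment: the output is noisier at larger x.
X = np.linspace(0.0, 1.0, 8)
ybar = np.sin(2 * np.pi * X)
noise_var = 0.01 + 0.2 * X          # input-dependent noise variance
n_reps = np.full_like(X, 10.0)
print(sk_predict(X, ybar, noise_var, n_reps, 0.37))
```

The predictive variance returned here gives pointwise error bars; the uniform bounds studied in the dissertation additionally control the error simultaneously over the whole input space and account for the fact that noise_var itself is estimated.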
249

Essays on Pricing and Promotional Strategies

Chung, Hoe Sang 03 September 2013 (has links)
This dissertation contains three essays on the theoretical analysis of pricing and promotional strategies. Chapter 1 serves as a brief introduction that provides the motivation for and an overview of the topics covered in the subsequent chapters. In Chapter 2, we study optimal couponing strategies in a differentiated duopoly with repeat purchase. Both firms can distribute defensive coupons alone, defensive and offensive coupons together, or mass-media coupons. They can also determine how many coupons to offer. Allowing consumers to change their tastes for the firms' products over time, we find that the optimal couponing strategy for the firms is to distribute coupons to all of their own customers, and only to them. The effects of intertemporally constant preferences and consumer myopia on the profitability of the optimal couponing are investigated as well. Chapter 3 examines the profitability of behavior-based price discrimination (BBPD) by duopolists producing horizontally differentiated experience goods. We consider a three-stage game in which the firms first make price discrimination decisions followed by two-stage pricing decisions. The main findings are: (i) there are two subgame-perfect Nash equilibria, one in which neither firm collects information about consumers' purchase histories, so that neither firm price discriminates, and one in which both firms collect consumer information to practice BBPD; and (ii) BBPD is more profitable than uniform pricing if sufficiently many consumers have a poor experience with the firms' products. The asymmetric case where one firm produces experience goods and the other search goods is also investigated. Chapter 4 provides a possible explanation of the fact that a single ticket price is charged for all movies (regardless of their quality) in the motion-picture industry. Considering a model à la Hotelling in which moviegoers form their beliefs about movie quality through the pricing schemes to which an exhibitor commits, we characterize the conditions under which committing to uniform pricing is more profitable than committing to variable pricing. The welfare consequences of a uniform-pricing commitment and some extensions of the model are discussed as well. / Ph. D.
250

Pier Scour Prediction in Non-Uniform Gravel Beds

Pandey, M., Oliveto, G., Pu, Jaan H., Sharma, P.K., Ojha, C.S.P. 16 June 2020 (has links)
Yes / Pier scour has been extensively studied in laboratory experiments. However, scour depth relationships based on data at the laboratory scale often yield unacceptable results when extended to field conditions. In this study, non-uniform gravel-bed laboratory and field datasets with gravel of median size ranging from 2.7 to 14.25 mm were considered to predict the maximum equilibrium scour depth at cylindrical piers. Specifically, a total of 217 datasets were collected: 132 from literature sources and 85 from new laboratory-scale experiments performed in this study, which constitute a novel contribution of this paper. From the analysis of the data, it was observed that Melville and Coleman's equation performs well on the laboratory datasets, while it tends to overestimate field measurements. Guo's and Kim et al.'s relationships showed good agreement only for laboratory datasets with finer non-uniform sediments: deviations in predicting the maximum scour depth in non-uniform gravel beds were found to be significantly greater than those for non-uniform sand and fine-gravel beds. Consequently, new K-factors for Melville and Coleman's equation were proposed in this study for non-uniform gravel-bed streams using a curve-fitting method. The results revealed good agreement between observations and predictions, which might be an attractive advance in overcoming scale effects. Moreover, a sensitivity analysis was performed to identify the most sensitive K-factors.
