  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world.
21

Sampling Algorithms for Evolving Datasets

Gemulla, Rainer 24 October 2008 (has links) (PDF)
Perhaps the most flexible synopsis of a database is a uniform random sample of the data; such samples are widely used to speed up the processing of analytic queries and data-mining tasks, to enhance query optimization, and to facilitate information integration. Most of the existing work on database sampling focuses on how to create or exploit a random sample of a static database, that is, a database that does not change over time. The assumption of a static database, however, severely limits the applicability of these techniques in practice, where data is often not static but continuously evolving. In order to maintain the statistical validity of the sample, any changes to the database have to be appropriately reflected in the sample. In this thesis, we study efficient methods for incrementally maintaining a uniform random sample of the items in a dataset in the presence of an arbitrary sequence of insertions, updates, and deletions. We consider instances of the maintenance problem that arise when sampling from an evolving set, from an evolving multiset, from the distinct items in an evolving multiset, or from a sliding window over a data stream. Our algorithms completely avoid any accesses to the base data and can be several orders of magnitude faster than algorithms that do rely on such expensive accesses. The improved efficiency of our algorithms comes at virtually no cost: the resulting samples are provably uniform and only a small amount of auxiliary information is associated with the sample. We show that the auxiliary information not only facilitates efficient maintenance, but it can also be exploited to derive unbiased, low-variance estimators for counts, sums, averages, and the number of distinct items in the underlying dataset. In addition to sample maintenance, we discuss methods that greatly improve the flexibility of random sampling from a system's point of view. 
More specifically, we initiate the study of algorithms that resize a random sample upwards or downwards. Our resizing algorithms can be exploited to dynamically control the size of the sample when the dataset grows or shrinks; they facilitate resource management and help to avoid under- or oversized samples. Furthermore, in large-scale databases with data being distributed across several remote locations, it is usually infeasible to reconstruct the entire dataset for the purpose of sampling. To address this problem, we provide efficient algorithms that directly combine the local samples maintained at each location into a sample of the global dataset. We also consider a more general problem, where the global dataset is defined as an arbitrary set or multiset expression involving the local datasets, and provide efficient solutions based on hashing.
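The thesis's maintenance algorithms, which also handle updates and deletions without touching the base data, are not reproduced in this abstract. As a minimal illustration of the underlying idea — keeping a provably uniform sample of a growing dataset — the classic insert-only reservoir-sampling scheme can be sketched as follows (function name illustrative):

```python
import random

def reservoir_sample(stream, k, rng=random):
    """Maintain a uniform random sample of up to k items over a stream of insertions.

    After n insertions, every item is in the sample with probability k/n.
    """
    sample = []
    for n, item in enumerate(stream, start=1):
        if n <= k:
            sample.append(item)        # fill the reservoir with the first k items
        else:
            j = rng.randrange(n)       # keep the new item with probability k/n
            if j < k:
                sample[j] = item       # evict a uniformly chosen resident
    return sample
```

For example, `reservoir_sample(range(10**6), 100)` yields a uniform sample of 100 items in one pass, using memory proportional to the sample rather than the dataset.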
22

How predictable are the Academy Awards?

Stoppe, Sebastian 06 March 2015 (has links) (PDF)
In this exploratory study, we examine whether a sample of film enthusiasts, judging the nominees of the 87th Academy Awards for the films of 2014, produces results similar to those of the actual Academy members. An online survey was created and the participants' votes were tabulated. The results of the simulated awards voting turn out to be quite similar to the actual Academy decisions. However, further methodological adjustments and follow-up studies are recommended to validate these results.
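The study's tallying procedure is not detailed in the abstract; a minimal sketch of how survey ballots might be reduced to plurality winners and compared against the actual Academy choices could look like this (category and nominee names are hypothetical, not the study's data):

```python
from collections import Counter

def plurality_winners(votes_by_category):
    """Return the most-voted nominee per category from raw survey ballots."""
    return {cat: Counter(ballots).most_common(1)[0][0]
            for cat, ballots in votes_by_category.items()}

def agreement_rate(predicted, actual):
    """Fraction of categories where the survey winner matches the Academy's choice."""
    matches = sum(predicted[c] == actual[c] for c in actual)
    return matches / len(actual)

# Hypothetical ballots: two categories, three respondents each.
votes = {"Best Picture": ["A", "A", "B"], "Best Director": ["X", "Y", "Y"]}
survey_result = plurality_winners(votes)
```

A high `agreement_rate` between the survey winners and the real decisions would correspond to the similarity the study reports.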
23

Improved REBa₂Cu₃O₇₋ₓ (RE = Y, Gd) structure and superconducting properties by addition of acetylacetone in TFA-MOD precursor solutions

Erbe, Manuela, Hänisch, Jens, Freudenberg, Thomas, Kirchner, Anke, Kaskel, Stefan, Mönch, Ingolf, Schultz, Ludwig, Holzapfel, Bernhard 02 December 2019 (has links)
For the development of commercially viable high-performance high-temperature superconductors, the fabrication of biaxially textured (RE)Ba₂Cu₃O₇₋ₓ (REBCO, RE = Y, Gd) coated conductors via metal–organic decomposition of trifluoroacetate precursors (TFA-MOD) has become an attractive route for industrial scale-up due to low costs and simple operation. However, the hygroscopic nature of commonly used precursor solutions makes them very sensitive to water uptake from air humidity. This can degrade the final microstructure, which in turn deteriorates the critical current density. Here, we present a new method to overcome this problem by adding the moderator 2,4-pentanedione (acetylacetone, acac) to a pre-existing REBCO precursor solution. Our results show that even initially low-performance solutions can be enhanced to such an extent that they finally outperform standard high-performance solutions, and that the temperature window for their optimal growth widens significantly. Scanning electron microscopy gives evidence of considerable microstructural improvements, e.g. avoidance of pore formation and grooves, and reduction of buckling and surface granularity. X-ray investigations indicate texture improvements, and electrical measurements reveal that transport critical current densities (J_c) increase in self-field and in applied magnetic fields. For YBCO, a molar ratio of acac/RE = 0.64 is most effective and raises the maximum pinning force density F_p^max from 1.0 to 2.4 GN m⁻³ at 77 K. For GdBCO, a broad window of annealing temperatures (790–840 °C) is possible for films with J_c values above 2.9 MA cm⁻² and F_p^max above 3 GN m⁻³ at 77 K.
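The reported pinning force densities follow from the standard relation F_p = J_c × B; a small unit-conversion sketch makes the magnitudes concrete (the input values below are hypothetical, not the paper's measurements):

```python
def pinning_force_density(jc_MA_per_cm2, b_tesla):
    """Pinning force density F_p = J_c * B, returned in GN/m^3.

    Unit bookkeeping: 1 MA/cm^2 = 1e10 A/m^2, (A/m^2) * T = N/m^3, 1 GN = 1e9 N.
    """
    jc_si = jc_MA_per_cm2 * 1e10       # convert J_c to A/m^2
    return jc_si * b_tesla / 1e9       # N/m^3 -> GN/m^3
```

For instance, a hypothetical J_c of 0.24 MA cm⁻² at an applied field of 1 T corresponds to F_p = 2.4 GN m⁻³, the order of magnitude quoted above.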
25

Linear Approximation of Groups and Ultraproducts of Compact Simple Groups

Stolz, Abel 17 October 2013 (has links)
We derive basic properties of groups which can be approximated with matrices. These include closure of classes of such groups under group theoretic constructions including direct and inverse limits and free products. We show that metric ultraproducts of projective linear groups over fields of different characteristics are not isomorphic. We further prove that the lattice of normal subgroups in ultraproducts of compact simple groups is distributive. It is linearly ordered in the case of finite simple groups or Lie groups of bounded rank.
26

Short-term forecasting of salinity intrusion in Ham Luong river, Ben Tre province using Simple Exponential Smoothing method

Tran, Thai Thanh, Ngo, Quang Xuan, Ha, Hieu Hoang, Nguyen, Nhan Phan 13 May 2020 (has links)
Salinity intrusion in a river can severely affect quality of life, so it is important to find technical means to monitor and forecast it. In this paper, we design a forecasting model using the Simple Exponential Smoothing (SES) method, which produces weekly salinity-intrusion forecasts for the Ham Luong river (HLR), Ben Tre province, based on historical data obtained from the Center for Hydro-meteorological Forecasting of Ben Tre province. The results showed that the SES method provides an adequate predictive model for salinity intrusion at An Thuan, Son Doc, and Phu Khanh, while the forecasts at My Hoa, An Hiep, and Vam Mon could be improved upon by other techniques. This study suggests that the SES model is an easy-to-use modeling tool for water resource managers to obtain a quick preliminary assessment of salinity intrusion. / Salinity intrusion can severely affect human life, but it can be forecast. It is therefore important to find suitable technical methods for forecasting and monitoring salinity intrusion in rivers. In this paper, we use the Simple Exponential Smoothing method to forecast salinity intrusion on the Ham Luong river, Ben Tre province. The results show that the forecasting model is suitable for An Thuan, Son Doc, and Phu Khanh, while other, more suitable methods may be found for My Hoa, An Hiep, and Vam Mon. Simple Exponential Smoothing is very easy to apply in water resource management based on salinity-intrusion warnings.
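The SES recursion underlying such forecasts is level_t = α·y_t + (1 − α)·level_{t−1}, with the final level serving as the one-step-ahead forecast. A minimal sketch (the smoothing constant and salinity values below are illustrative, not the paper's fitted parameters):

```python
def ses_forecast(series, alpha):
    """Simple Exponential Smoothing: level_t = alpha*y_t + (1-alpha)*level_{t-1}.

    Returns the one-step-ahead forecast after consuming the whole series.
    """
    if not 0 < alpha <= 1:
        raise ValueError("alpha must be in (0, 1]")
    level = series[0]                  # initialize the level with the first observation
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return level

# Hypothetical weekly salinity readings (g/L) and smoothing constant:
forecast = ses_forecast([4.0, 4.5, 5.0, 5.2], alpha=0.5)
```

In practice α would be chosen by minimizing forecast error on the historical record, which is what fitting an SES model to the station data amounts to.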
27

A characterization of the groups PSLn(q) and PSUn(q) by their 2-fusion systems, q odd

Kaspczyk, Julian 31 May 2024 (has links)
Let q be a nontrivial odd prime power, and let 𝑛 ≥ 2 be a natural number with (𝑛, 𝑞) ≠ (2, 3). We characterize the groups 𝑃𝑆𝐿𝑛(𝑞) and 𝑃𝑆𝑈𝑛(𝑞) by their 2-fusion systems. This contributes to a programme of Aschbacher aiming at a simplified proof of the classification of finite simple groups.
