  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations, provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world.
181

Joint Relationships between Civic Involvement, Higher Education, and Selected Personal Characteristics among Adults in the United States

Blanks, Felica Wooten 26 April 2000 (has links)
American democracy fosters the common good of society by allowing citizen involvement in government. Sustaining American democracy depends on civic involvement among citizens. Civic involvement, which consists of citizens' informed involvement in government, politics, and community life, is a desired behavior among adult citizens in the United States and it is a desired outcome of higher education. However, people in the latter part of the twentieth century have questioned the extent to which higher education makes a difference in civic involvement among adults in the United States. College educators are challenged to explain the relationship between higher education and civic involvement among adults in the 1990s. The purpose of the present study is to investigate the relationship between higher education and civic involvement. The researcher approached this issue by examining relationships between measures of civic involvement and personal characteristics such as education level, race, gender, age, and socioeconomic status among adults in the United States. The researcher compared joint relationships between civic involvement and personal characteristics among college graduates with the joint relationships between civic involvement and personal characteristics among adults with some college education and adults with no college education. Data from the Adult Civic Involvement component of the National Household Education Survey of 1996 (NHES:96) were analyzed. This survey was conducted by the National Center for Education Statistics. Using list-assisted, random digit dialing methods and computer assisted telephone interviewing techniques, data were collected from a nationally representative sample of non-institutionalized civilians who were eighteen years of age or older at the time of the survey. 
Data were collected regarding respondents' (a) personal characteristics, (b) use of information sources, (c) knowledge of government, (d) community participation, and (e) political participation. The selected technique for analyzing data was canonical correlation analysis (CCA), which is a form of multivariate analysis that subsumes multiple regression, multivariate analysis of variance, and discriminant analysis. The results revealed that civic involvement among adults in the United States is moderate at best. Low to moderate civic involvement among adults is mostly attributed to the absence of civic behaviors among adults with no college education. Among adults, overall civic involvement has strong relationships with education level, race, gender, age, and socioeconomic status. While the relationship between higher education and civic involvement is strong, there are significant differences in civic involvement among college graduates when grouped according to race, gender, age, and socioeconomic status. White male college graduates with high incomes tend to demonstrate the attributes of civic involvement to a greater extent than other groups. Among adults with some college education, overall civic involvement is characteristic of older males. Similarly, older adults with no college education demonstrate civic involvement to a greater extent than younger adults with no college education. These findings are consistent with the results of previous studies. The findings also extend the results of previous studies by explaining the relationships between civic involvement and multiple personal characteristics when analyzed simultaneously. The findings suggest a need for ongoing analyses of civic involvement among adult citizens and among college students. The results further imply a need for college personnel to identify and implement strategies that will improve the civic outcomes of higher education for minorities and females in various age and income categories. 
/ Ph. D.
182

Breaking the curse of dimensionality in electronic structure methods: towards optimal utilization of the canonical polyadic decomposition

Pierce, Karl Martin 27 January 2022 (has links)
Despite the fact that higher-order tensors (HOTs) plague electronic structure methods and severely limit the modeling of interesting chemistry problems, the introduction and application of HOT decompositions, specifically the canonical polyadic (CP) decomposition, remain fairly limited. The CP decomposition is an incredibly useful sparse tensor factorization that has the ability to disentangle all correlated modes of a tensor. However, the complexities associated with the CP decomposition have made its application in electronic structure methods difficult. Some of the major issues related to the CP decomposition are a product of the mathematics of computing the decomposition: determining the exact CP rank is an NP-hard problem, finding stationary points for rank-R approximations requires non-linear optimization techniques, and inexact CP approximations can introduce a large degree of error into tensor networks. Other issues are a result of how computer architectures are constructed. For example, central processing units (CPUs) are organized in a way that maximizes the efficiency of dense linear algebra, and, thus, the performance of routine tensor algebra kernels, like the Khatri-Rao product, is limited. In this work, we seek to reduce the complexities associated with the CP decomposition and create a route for others to develop reduced-scaling electronic structure theory methods using the CP decomposition. In Chapter 2, we introduce the robust tensor network approximation. This approximation is a way to, in general, eliminate the leading-order error associated with approximated tensors in a network. We utilize the robust network approximation to significantly increase the accuracy of approximating density fitting (DF) integral tensors using rank-deficient CP decompositions in the particle-particle ladder (PPL) diagram of the coupled cluster method with single and double substitutions (CCSD). 
We show that one can produce results with negligible error in chemically relevant energy differences using a CP rank roughly the same size as the DF fitting basis, which is a significantly smaller rank requirement than found using either a nonrobust approximation or similar grid-initialized CP approximations (the pseudospectral (PS) and tensor hypercontraction (THC) approximations). Introduction of the CP approximation formally reduces the complexity of the PPL diagram from 𝓞(N⁶) to 𝓞(N⁵) and, using the robust approximation, we are able to observe a cost reduction in CCSD calculations for systems as small as a single water molecule. In Chapter 3, we further demonstrate the utility of the robust network approximation and, in addition, we construct a scheme to optimize a grid-free CP decomposition of the order-four Coulomb integral tensor in 𝓞(N⁴) time. Using these ideas, we reduce the complexity of ten bottleneck contractions from 𝓞(N⁶) to 𝓞(N⁵) in the Laplace transform (LT) formulation of the perturbative triples, (T), correction to CCSD. We show that introducing CP into the LT (T) method with a CP rank roughly the size of the DF fitting basis reduces the cost of computing medium-size molecules by a factor of about 2.5 and introduces negligible error into chemically relevant energy differences. Furthermore, we implement these low-cost algorithms using newly developed, optimized tensor algebra kernels in the massively-parallel, block-sparse TiledArray [Calvin et al., Chemical Reviews 2021, 121 (3), 1203-1231] tensor framework. / Doctor of Philosophy / Electronic structure methods and accurate modeling of quantum chemistry have developed alongside advancements in computer infrastructures. Increasingly large and efficient computers have allowed researchers to model remarkably large chemical systems. 
Sadly, as fast as computer infrastructures grow (Moore's law predicts that the number of transistors in a computer will double every 18 months), the cost of electronic structure methods grows more quickly. One of the least expensive electronic structure methods, Hartree-Fock (HF), grows quartically with molecular size; this means that doubling the size of a molecule increases the number of computer operations by a factor of 16. However, it is known that when chemical systems become sufficiently large, the amount of physical information added to the system grows linearly with system size.[Goedecker, et al. Comput. Sci. Eng., 2003, 5, (4), 14-21] Unfortunately, standard implementations of electronic structure methods will never achieve linear scaling; the disparity between the actual cost and the physical scaling of molecules is a result of storing and manipulating data using dense tensors and is known as the curse of dimensionality.[Bellman, Adaptive Control Processes, 1961, 2045, 276] Electronic structure theorists, in their desire to apply accurate methods to increasingly large systems, have known for some time that the cost of conventional algorithms is unreasonably high. These theorists have found that one can reveal sparsity and develop reduced-complexity algorithms using matrix decomposition techniques. However, higher-order tensors (HOTs), tensors with more than two modes, are routinely necessary in algorithm formulations. Matrix decompositions applied to HOTs are not necessarily straightforward and can have no effect on the limiting behavior of an algorithm. For example, because of the positive definiteness of the Coulomb integral tensor, it is possible to perform a Cholesky decomposition (CD) to reduce the tensor from an order-4 tensor to a product of order-3 tensors.[Beebe, et al. Int. J. Quantum Chem., 1977, 12, 683-705] However, using the CD-approximated Coulomb integral tensors it is not possible to reduce the complexity of popular methods such as Hartree-Fock or coupled cluster theory. We believe that the next step in reducing the complexity of electronic structure methods is the accurate application of HOT decompositions. In this work, we consider a single HOT decomposition: the canonical polyadic (CP) decomposition, which represents a tensor as a polyadic sum of products. The CP decomposition disentangles all modes of a tensor by representing an order-N tensor as N order-2 tensors. In this work, we construct the CP decomposition of tensors using algebraic optimization. Our goal, here, is to tackle one of the biggest issues associated with the CP decomposition: accurately approximating tensors and tensor networks. In Chapter 2, we develop a robust formulation to approximate tensor networks, a formulation which removes the leading-order error associated with tensor approximations in a network.[Pierce, et al. J. Chem. Theory Comput., 2021, 17 (4), 2217-2230] We apply a robust CP approximation to the coupled cluster method with single and double substitutions (CCSD) to reduce the overall cost of the approach. Using this robust CP approximation we can compute CCSD, on average, 2.5-3 times faster while introducing negligibly small error in chemically relevant energy values. Furthermore, in Chapter 3, we again use the robust CP network approximation, in conjunction with a novel, low-cost approach to computing order-four CP decompositions, to reduce the cost of ten bottleneck computations in the perturbative triples, (T), correction to CCSD. By approximating these computations, we are able to reduce the cost of (T) by a factor of about 2.5 while again introducing only negligibly small error.
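The alternating least squares (ALS) procedure typically used to compute a rank-R CP approximation can be sketched in a few lines of dense numpy. This is a toy sketch only: it assumes an order-3 tensor and a fixed rank, and has none of the rank-determination machinery or optimized block-sparse TiledArray kernels the abstract describes.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n matricization: rows index the chosen mode."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(A, B):
    """Column-wise Kronecker product of two factor matrices."""
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, A.shape[1])

def cp_als(T, rank, iters=100, seed=0):
    """Rank-`rank` CP approximation of an order-3 tensor via ALS."""
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((s, rank)) for s in T.shape)
    for _ in range(iters):
        # Each update solves a linear least-squares problem for one factor,
        # holding the other two fixed (normal equations via the Hadamard
        # product of Gram matrices).
        A = unfold(T, 0) @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = unfold(T, 1) @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = unfold(T, 2) @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C
```

The Khatri-Rao product appearing in each update is exactly the kernel whose limited performance on conventional CPUs is noted above.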
183

On Refinements of Van der Waerden's Theorem

Farhangi, Sohail 28 October 2016 (has links)
We examine different methods of generalizing Van der Waerden's Theorem, including the Multidimensional Van der Waerden Theorem, the Canonical Van der Waerden Theorem, and other variants. / Master of Science
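For concreteness, the theorem being refined states that for every number of colours k and progression length l there is a least N = W(k, l) such that every k-colouring of {1, ..., N} contains a monochromatic arithmetic progression of length l. A brute-force sketch (feasible only for the smallest cases) recovers the classical value W(2, 3) = 9:

```python
from itertools import product

def has_mono_ap(coloring, length):
    """True if the colouring of {0, ..., n-1} contains a monochromatic
    arithmetic progression of the given length."""
    n = len(coloring)
    for start in range(n):
        for step in range(1, (n - 1 - start) // (length - 1) + 1):
            idxs = range(start, start + step * length, step)
            if len({coloring[i] for i in idxs}) == 1:
                return True
    return False

def vdw(colors, length):
    """Least N such that every colouring with `colors` colours of
    {1, ..., N} contains a monochromatic AP of the given length."""
    n = length
    while True:
        if all(has_mono_ap(c, length) for c in product(range(colors), repeat=n)):
            return n
        n += 1
```

The exhaustive search over k^N colourings makes the cost explode quickly, which is why exact Van der Waerden numbers are known only in a handful of cases.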
184

Discipline-Independent Text Information Extraction from Heterogeneous Styled References Using Knowledge from the Web

Park, Sung Hee 11 July 2013 (has links)
In education and research, references play a key role. They give credit to prior works, and provide support for reviews, discussions, and arguments. The set of references attached to a publication can help describe that publication, can aid with its categorization and retrieval, can support bibliometric studies, and can guide interested readers and researchers. If suitably analyzed, that set can aid with the analysis of the publication itself, especially regarding all its citing passages. However, extracting and parsing references are difficult problems. One concern is that there are many styles of references, and identifying what style was employed is problematic, especially in heterogeneous collections of theses and dissertations, which cover many fields and disciplines, and where different styles may be used even in the same publication. We address these problems by drawing upon suitable knowledge found in the WWW. In particular, we use appropriate lists (e.g., of names, cities, and other types of entities). We use available information about the many reference styles found, in a type of reverse engineering. We use available references to guide machine learning. In particular, we research a two-stage classifier approach, with multi-class classification with respect to reference styles, and partially solve the problem of parsing surface representations of references. We describe empirical evidence for the effectiveness of our approach and plans for improvement of our method. / Ph. D.
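The first stage of the two-stage approach described above, multi-class classification of reference style, can be sketched with a toy pipeline. The four training references and the APA/IEEE labels below are invented examples, and character n-grams stand in for the richer Web-derived knowledge the abstract describes:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labelled references (hypothetical examples of two styles)
refs = [
    "Smith, J. (2001). Information extraction. Journal of Data, 5(2), 10-20.",
    "Jones, A. (1998). Parsing references. Text Mining Review, 3(1), 1-9.",
    '[1] J. Smith, "Information extraction," J. Data, vol. 5, pp. 10-20, 2001.',
    '[2] A. Jones, "Parsing references," Text Min. Rev., vol. 3, pp. 1-9, 1998.',
]
styles = ["APA", "APA", "IEEE", "IEEE"]

# Stage 1: style classifier; character n-grams capture the punctuation
# and layout cues ("[1]", quotation marks, "vol.") that distinguish styles.
clf = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
clf.fit(refs, styles)
```

In the full approach, the predicted style would then select a style-specific parser for the second stage, which extracts fields such as authors, title, and venue.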
185

Optimal, Multiplierless Implementations of the Discrete Wavelet Transform for Image Compression Applications

Kotteri, Kishore 12 May 2004 (has links)
The use of the discrete wavelet transform (DWT) for the JPEG2000 image compression standard has sparked interest in the design of fast, efficient hardware implementations of the perfect reconstruction filter bank used for computing the DWT. The accuracy and efficiency with which the filter coefficients are quantized in a multiplierless implementation impacts the image compression and hardware performance of the filter bank. A high precision representation ensures good compression performance, but at the cost of increased hardware resources and processing time. Conversely, lower precision in the filter coefficients results in smaller, faster hardware, but at the cost of poor compression performance. In addition to filter coefficient quantization, the filter bank structure also determines critical hardware properties such as throughput and power consumption. This thesis first investigates filter coefficient quantization strategies and filter bank structures for the hardware implementation of the biorthogonal 9/7 wavelet filters in a traditional convolution-based filter bank. Two new filter bank properties—"no-distortion-mse" and "deviation-at-dc"—are identified as critical to compression performance, and two new "compensating" filter coefficient quantization methods are developed to minimize degradation of these properties. The results indicate that the best performance is obtained by using a cascade form for the filters with coefficients quantized using the "compensating zeros" technique. The hardware properties of this implementation are then improved by developing a cascade polyphase structure that increases throughput and decreases power consumption. Next, this thesis investigates implementations of the lifting structure—an orthogonal structure that is more robust to coefficient quantization than the traditional convolution-based filter bank in computing the DWT. 
Novel, optimal filter coefficient quantization techniques are developed for a rational and an irrational set of lifting coefficients. The results indicate that the best quantized lifting coefficient set is obtained by starting with the rational coefficient set and using a "lumped scaling" and "gain compensation" technique for coefficient quantization. Finally, the image compression properties and hardware properties of the convolution and lifting based DWT implementations are compared. Although the lifting structure requires fewer computations, the cascaded arrangement of the lifting filters requires significant hardware overhead. Consequently, the results show that the convolution-based cascade polyphase structure (with "<i>z</i>₁-compensated" coefficients) gives the best performance in terms of image compression performance and hardware metrics like throughput, latency and power consumption. / Master of Science
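The lifting structure discussed above factors the biorthogonal 9/7 filters into short predict/update steps. A single-level, 1-D floating-point sketch (standard CDF 9/7 lifting constants, simple symmetric edge handling, even-length input assumed) shows why each lifting step remains exactly invertible regardless of how the coefficients are quantized: the inverse simply subtracts what the forward step added.

```python
import numpy as np

# Standard CDF 9/7 lifting constants
ALPHA, BETA = -1.586134342, -0.052980118
GAMMA, DELTA = 0.882911076, 0.443506852
ZETA = 1.149604398  # final scaling

def _left(a):   # simple symmetric extension on the left edge
    return np.insert(a[:-1], 0, a[0])

def _right(a):  # simple symmetric extension on the right edge
    return np.append(a[1:], a[-1])

def fwd_97(x):
    """One level of the 9/7 DWT via lifting (even-length 1-D input)."""
    s, d = x[0::2].astype(float), x[1::2].astype(float)
    d += ALPHA * (s + _right(s))   # predict 1
    s += BETA  * (d + _left(d))    # update 1
    d += GAMMA * (s + _right(s))   # predict 2
    s += DELTA * (d + _left(d))    # update 2
    return ZETA * s, d / ZETA      # approximation, detail

def inv_97(s, d):
    """Inverse transform: undo each lifting step in reverse order."""
    s, d = s / ZETA, d * ZETA
    s -= DELTA * (d + _left(d))
    d -= GAMMA * (s + _right(s))
    s -= BETA  * (d + _left(d))
    d -= ALPHA * (s + _right(s))
    x = np.empty(s.size + d.size)
    x[0::2], x[1::2] = s, d
    return x
```

Quantizing ALPHA through ZETA degrades compression quality but never breaks perfect reconstruction of this ladder, which is the robustness property the thesis exploits.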
186

Developing an index of biotic integrity (IBI) for warmwater wadeable streams in Virginia

Smogor, Roy A. 01 November 2008 (has links)
The index of biotic integrity (IBI) comprises several fish-assemblage attributes, called metrics, that reflect how a site differs from least-disturbed (by anthropogenic influences) conditions. Understanding how metrics at least-disturbed sites vary across landscape classes (e.g., physiographies, ecoregions) and stream sizes helps one determine appropriate regions and stream-size ranges in which to develop and use the IBI. The IBI’s utility depends on how accurately and reliably each metric reflects disturbance. I make recommendations for developing the IBI for use in Virginia. I examined metric variation across landscape classes: physiographies, ecoregions, and drainage groups; and across stream sizes. I examined intra-region relations between metrics and disturbance measures and whether relations met conventional IBI assumptions. Taxonomic metrics (e.g., number of native minnow species) and reproductive metrics (e.g., proportion of individuals as lithophils) varied more across physiographies than across ecoregions or drainages. Trophic metrics (e.g., proportion as invertivores) varied least across landscape classes and most with stream size. For Virginia, I recommend three regions: Coastal Plain, Piedmont, and Mountain, in which to develop and use distinct versions of the IBI. In Coastal Plain, disturbance-vs-metric relations were mostly contrary to IBI assumptions. In Piedmont, trophic and tolerance metrics best reflected disturbance and met IBI assumptions; in Mountain, reproductive metrics did so. Disturbance measures accounted for about 20% of the variance in metrics, suggesting that my data incompletely represented disturbance effects on fish. Until further validation, I recommend that each regional IBI retain at least two taxonomic, two trophic, two reproductive, and one tolerance metric. / Master of Science
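The general shape of IBI construction, rating each fish-assemblage metric against a least-disturbed expectation and summing the ratings, can be sketched as follows. The 1/3/5 scoring scheme, metric names, expectations, and tolerance are hypothetical illustrations, not the thesis's actual criteria:

```python
# Hypothetical 1/3/5 scoring of fish-assemblage metrics against
# least-disturbed expectations (names and numbers are illustrative).
def score_metric(value, expected, tolerance):
    """5 = near the least-disturbed expectation, 3 = moderate deviation,
    1 = strong deviation."""
    dev = abs(value - expected) / expected
    return 5 if dev <= tolerance else 3 if dev <= 2 * tolerance else 1

def ibi(site_metrics, expectations, tolerance=0.25):
    """Sum the per-metric ratings into a single index for the site."""
    return sum(score_metric(site_metrics[name], expected, tolerance)
               for name, expected in expectations.items())
```

In practice, the expectations themselves vary with region and stream size, which is exactly why the thesis examines metric variation across physiographies, ecoregions, drainages, and stream sizes before recommending regional IBI versions.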
187

Using Meyer's Twilight in the secondary classroom

Miller, Tierney 01 January 2010 (has links)
Stephenie Meyer's Twilight series has swept the nation and the world. Everywhere you go, the names Edward and Bella seem to have punctured the vernacular. People are obsessed with the characters, the movie, the actors, and the author. Mothers, fathers, sons, and daughters all around the world are reading the series. The first Twilight novel has been on the top 100 bestsellers list on Amazon.com for 735 days (Amazon.com, 2009). The four-book series has been on the New York Times Best Sellers list for 121 weeks as of December 4, 2009 (NYTimes.com, 2009). The book has also been translated into 20 different languages ("Bio", n.d.). The Twilight movie premiered in November 2008 at number one, bringing in 70 million dollars during its opening weekend ("Bio", n.d.). But one just has to walk into a bookstore, or even Wal-Mart with its giant book and memorabilia displays, to understand the Twilight phenomenon. This study considers how this young adult novel can be transformed into a learning opportunity for secondary students. The study explores in depth the use of young adult novels in the classroom and their ability to teach students various concepts. The main focus of this research is Twilight and how it can be used in the classroom to teach canonical literary elements such as symbolism and author's purpose.
188

Canonical correlation analysis of aggravated robbery and poverty in Limpopo Province

Rwizi, Tandanai 05 1900 (has links)
The study was aimed at exploring the relationship between poverty and aggravated robbery in Limpopo Province. Sampled secondary data on aggravated robbery offenders, obtained from the South African Police Service (SAPS), Polokwane, were used in the analysis. Empirical research on poverty and crime suggests that poverty increases vulnerability to crime. The poverty set was categorised by gender, employment status, marital status, race, age and educational attainment. Variables for aggravated robbery were house robbery, bank robbery, street/common robbery, carjacking, truck hijacking, cash-in-transit robbery and business robbery. Canonical correlation analysis was used to make inferences about the relationship between these two sets. The results revealed a significant positive correlation of 0.219 (p-value = 0.025) between poverty and aggravated robbery at the five per cent significance level. Of the thirteen variables entered into the poverty-aggravated robbery model, five emerged as statistically significant: gender, marital status, employment status, common robbery and business robbery. / Mathematical Sciences / M. Sc. (Statistics)
189

Das Antihelminthikum Niclosamid inhibiert das Wachstum kolorektaler Karzinomzelllinien durch Modulation des kanonischen und des nicht-kanonischen Wnt-Signalweges / Anthelmintic niclosamide inhibits colorectal cancer cell lines via modulation of the canonical and non-canonical Wnt signalling pathway

Monin, Malte Benedikt 10 February 2016 (has links)
Wnt/β-catenin signal transduction plays a prominent role in colorectal carcinogenesis. Niclosamide is a derivative of salicylic acid used against tapeworm infections. It has been shown that niclosamide modulates the Wnt/β-catenin signalling pathway. The aim of the present work was to evaluate the therapeutic use of niclosamide in colorectal carcinoma. Cell proliferation of colorectal carcinoma cell lines (human SW480 and SW620 cells as well as rat CC531 cells) and of rat fibroblasts was assessed by light-microscopic cell counting after 12 and 24 hours of incubation with niclosamide. Apoptosis rates were determined with a cell-death assay. Immunofluorescence staining revealed the expression pattern of active β-catenin. The promoter activity of the LEF/TCF transcription factor was analysed with a luciferase assay after transfection with TOPflash. The gene expression of Wnt-modulating factors (Bcl-9 and Wif1), of components of the β-catenin degradation complex (Axin2 and GSK 3β), of canonical target genes (Met, MMP7 and Cyclin D1), and of c-jun, the key protein of the non-canonical Wnt/JNK signalling pathway, was examined by RT-PCR. For confirmation at the protein level, Western blots with antibodies against active β-catenin and c-jun were additionally performed. Cell proliferation of the colorectal carcinoma cell lines was inhibited dose-dependently, and niclosamide induced apoptosis. Incubation with niclosamide did not lead to a redistribution of active β-catenin from the nuclear to the cytosolic fraction. However, the Wnt promoter activity of LEF/TCF was significantly reduced after 12 hours of incubation with 10 and 50 μM niclosamide. Canonical Wnt target genes (Met, MMP7 and Cyclin D1) as well as the coactivator Bcl-9 were inhibited at the transcriptional level, while the non-canonical key protein c-jun was activated. 
In summary, incubation with niclosamide has inhibitory effects on colorectal carcinoma cell lines and reduces canonical Wnt activity. These effects may result from a disturbed formation of the triple complex of Bcl-9, β-catenin and LEF/TCF and from an activation of c-jun, and thus of the non-canonical Wnt/JNK signalling pathway. In in vivo studies, we intend to verify these data in an animal model and thereby further assess niclosamide as an option for patients with metastatic colorectal carcinoma.
190

Towards on-line domain-independent big data learning : novel theories and applications

Malik, Zeeshan January 2015 (has links)
Feature extraction is an extremely important pre-processing step for pattern recognition and machine learning problems. This thesis highlights how one can best extract features from the data in a purely online and adaptive manner. The solution to this problem is given for both labeled and unlabeled datasets, by presenting a number of novel on-line learning approaches. Specifically, the differential equation method for solving the generalized eigenvalue problem is used to derive a number of novel machine learning and feature extraction algorithms. The incremental eigen-solution method is used to derive a novel incremental extension of linear discriminant analysis (LDA). Further, the proposed incremental version is combined with extreme learning machine (ELM), in which the ELM is used as a preprocessor before learning. In this first key contribution, the dynamic random expansion characteristic of ELM is combined with the proposed incremental LDA technique, and shown to offer a significant improvement in maximizing the discrimination between points in two different classes, while minimizing the distance within each class, in comparison with other standard state-of-the-art incremental and batch techniques. In the second contribution, the differential equation method for solving the generalized eigenvalue problem is used to derive a novel, purely incremental version of the slow feature analysis (SFA) algorithm, termed the generalized eigenvalue based slow feature analysis (GENEIGSFA) technique. Further, the time series expansions of echo state networks (ESN) and radial basis functions (RBF) are used as a pre-processor before learning. In addition, higher-order derivatives are used as a smoothing constraint on the output signal. 
Finally, an online extension of the generalized eigenvalue problem, derived from James Stone's criterion, is tested, evaluated and compared with the standard batch version of the slow feature analysis technique, to demonstrate its comparative effectiveness. In the third contribution, light-weight extensions of the statistical technique known as canonical correlation analysis (CCA), for both twinned and multiple data streams, are derived by using the same method of solving the generalized eigenvalue problem. The proposed method is enhanced by maximizing the covariance between data streams while simultaneously maximizing the rate of change of variances within each data stream. A recurrent set of connections, as used by ESNs, serves as a pre-processor between the inputs and the canonical projections in order to capture shared temporal information in two or more data streams. A solution to the problem of identifying a low-dimensional manifold in a high-dimensional data space is then presented in an incremental and adaptive manner. Finally, an online, locally optimized extension of Laplacian Eigenmaps is derived, termed the generalized incremental Laplacian Eigenmaps technique (GENILE). Apart from the benefit of the incremental nature of the proposed manifold-based dimensionality reduction technique, the projections produced by this method are shown, in most cases, to yield better classification accuracy than standard batch versions of these techniques, on both artificial and real datasets.
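The batch slow feature analysis baseline that the incremental GENEIGSFA technique generalizes reduces to a single generalized symmetric eigenproblem; a minimal batch sketch (not the thesis's incremental, differential-equation-based algorithm) looks as follows:

```python
import numpy as np
from scipy.linalg import eigh

def sfa(X):
    """Batch slow feature analysis: minimize the variance of the discrete
    temporal derivative subject to unit variance, via the generalized
    symmetric eigenproblem A w = lambda B w."""
    X = X - X.mean(axis=0)
    Xdot = np.diff(X, axis=0)          # discrete-time derivative
    A = Xdot.T @ Xdot / len(Xdot)      # derivative covariance ("slowness")
    B = X.T @ X / len(X)               # signal covariance (constraint)
    evals, W = eigh(A, B)              # ascending: slowest features first
    return X @ W, evals
```

The slowest feature corresponds to the smallest generalized eigenvalue; an online method must instead track these eigenvectors adaptively as data streams in, which is the problem the thesis addresses.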
