141.
Self-Esteem Among Upward Bound Students: Differences by Race and Gender
Butterfield, Alexandra K. (28 May 1999)
Higher education has experienced an increase in enrollment. Of the approximately 14.9 million students in higher education, 24.5% are minority students. Although this percentage is close to the minority share of the U.S. population (24.7%), the distribution of minorities enrolled in higher education differs significantly from their distribution in the nation's population. The percentages of African Americans (10.1%) and Hispanics (7.3%) in higher education are lower than their shares of the general population (12.1% and 9%, respectively).
There is also an unequal distribution of enrollment in higher education based on socioeconomic status. The percentage of students from the top family income quartile attending college is 86%. The percentage of students from the bottom family income quartile attending college, however, is 52%.
The disproportionate representation by race and socioeconomic status in higher education has prompted campuses across the country to develop a variety of precollege programs. These programs provide students who are disadvantaged by race or socioeconomic status with the resources and academic skills needed to pursue higher education.
One of these precollege programs is Upward Bound. Upward Bound serves high school participants aged 13 to 19 years who are either first generation, socioeconomically disadvantaged, or both. Upward Bound staff focus primarily on promoting academic performance among participants. There is a significant body of literature that suggests self-esteem directly correlates with academic performance. However, Upward Bound staff do not purposefully offer programs to promote self-esteem among participants.
This study was designed to gain a better understanding of self-esteem among Upward Bound participants by race (majority versus minority) and gender. The Self-Esteem Index (SEI) was used to collect data. The SEI yields an overall self-esteem score as well as scores on four subscales. Data were analyzed using a series of two-way analyses of variance to explain differences by race (majority versus minority) and gender.
There were no statistically significant differences in self-esteem by race. The findings, however, reflected a trend in which majority students consistently scored higher than did minority students. There were statistically significant differences in self-esteem by gender on the Academic Competence scale, Peer Popularity scale, and Personal Security scale.
This study has implications for future practice in higher education. The results might help Upward Bound counselors learn more about the self-esteem of Upward Bound students, inform students about their own self-esteem, and provide directors of federal programs with baseline information about the self-esteem of students participating in the Upward Bound program. / Master of Arts
142.
Semantic Decomposition By Covering
Sripadham, Shankar B. (10 August 2000)
This thesis describes the implementation of a covering algorithm for the semantic decomposition of sentences in technical patents. This research complements the ASPIN project, which has the long-term goal of providing an automated system for digital system synthesis from patents.
In order to develop a prototype of the system explained in a patent, a natural language processor (sentence interpreter) is required. Such systems typically interpret a sentence by syntactic analysis (parsing) followed by semantic analysis. Quite often, however, the technical narrative contains grammatical errors, incomplete sentences, anaphoric references, and typographical errors that can cause the grammatical parse to fail. In such situations, an alternate method that uses a repository of pre-compiled simple sentences (called frames) to analyze the sentences of the patent can be a useful backup. By semantically decomposing the sentences of a patent into a set of frames whose meanings are fully understood, the meaning of the patent sentences can be interpreted.
This thesis deals with the semantic decomposition of sentences using a branch-and-bound covering algorithm, implemented in C++. A number of experiments were conducted to evaluate its performance. The algorithm is fast and flexible, and provides good coverage (100% for some sentences). Using a repository of 3459 frames, the system covered 67.68% of the sentence tokens, and 54.25% of the frames identified in sentence covers were found to be semantically correct. The experiments suggest that performance can be improved by increasing the number of frames in the repository. / Master of Science
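The abstract does not spell out the covering procedure, but the core idea, covering a sentence's tokens with a minimal set of frames via branch and bound, can be sketched in Python. All names and the pruning bound below are illustrative assumptions, not the thesis's C++ implementation:

```python
def bnb_cover(tokens, frames):
    """Find a minimum-size subset of frames whose union covers `tokens`.

    Depth-first branch and bound: branch on including/excluding the frame
    that covers the most remaining tokens, and prune when the current
    partial selection cannot beat the best cover found so far.
    """
    tokens = frozenset(tokens)
    frames = [frozenset(f) & tokens for f in frames]
    best = {"size": len(frames) + 1, "cover": None}

    def search(remaining, chosen, candidates):
        if not remaining:
            if len(chosen) < best["size"]:
                best["size"] = len(chosen)
                best["cover"] = list(chosen)
            return
        candidates = [f for f in candidates if f & remaining]
        if not candidates:
            return
        # Lower bound: even the largest candidate covers at most m tokens,
        # so at least ceil(|remaining| / m) more frames are needed.
        m = max(len(f & remaining) for f in candidates)
        if len(chosen) + -(-len(remaining) // m) >= best["size"]:
            return  # prune: cannot improve on the incumbent
        f = max(candidates, key=lambda g: len(g & remaining))
        rest = [g for g in candidates if g is not f]
        search(remaining - f, chosen + [f], rest)   # include f
        search(remaining, chosen, rest)             # exclude f

    search(tokens, [], frames)
    return best["cover"]
```

On a toy sentence of four tokens, the sketch returns a two-frame cover even though single-token frames are also available in the repository.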
143.
Avaliação de métodos ótimos e subótimos de seleção de características de texturas em imagens / Evaluation of optimal and suboptimal feature selection methods applied to image textures
Roncatti, Marco Aurelio (10 July 2008)
Texture features are efficient image descriptors and can be employed in a wide range of applications, such as classification and segmentation. However, when the number of features is considerably high, pattern recognition tasks may be compromised. Feature selection helps prevent this problem, as it can be used to reduce data dimensionality and to reveal the features which best characterize the images under investigation. This work evaluates optimal and suboptimal feature selection algorithms in the context of textural features extracted from images. Branch and bound, exhaustive search, and sequential floating forward selection (SFFS) were the algorithms investigated. The criterion functions employed during selection were the Jeffries-Matusita (JM) distance and the minimum distance classifier (MDC) accuracy rate. Texture features were computed from first-order statistics, co-occurrence matrices, and Gabor filters. Three experiments were conducted: classification of regions of an aerial photograph of a eucalyptus plantation, unsupervised segmentation of mosaics of Brodatz texture samples, and supervised segmentation of MRI images of the brain. Branch and bound is an optimal algorithm and more efficient than exhaustive search in most cases, but it is still time-consuming. This work presents a novel strategy for the branch and bound algorithm, named forest, which considerably improves its performance. The evaluation of the feature selection methods revealed that the best feature subsets were those computed with the MDC accuracy rate criterion function. Exhaustive search and branch and bound, even with the forest strategy, were considered unfeasible due to their high processing times when the number of features is very large. The SFFS approach yielded the best results: not only was it faster, it was also capable of finding optimal or nearly optimal solutions. Finally, it was observed that the precision of pattern recognition tasks increases as the number of features decreases, and that the best feature subsets are frequently formed by texture features computed with distinct techniques.
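As a hedged illustration of the SFFS procedure evaluated above, here is a minimal Python sketch. The criterion function `J`, the toy scores, and all names are assumptions for illustration; in the thesis the criterion would be the JM distance or the MDC accuracy rate:

```python
def sffs(features, J, target_size):
    """Sequential floating forward selection (simplified sketch).

    Greedily adds the feature that most improves criterion J, then
    "floats": removes a feature whenever doing so improves the best
    score recorded for the smaller subset size. J maps a frozenset of
    features to a score (higher is better).
    """
    selected = []
    best_at_size = {}  # best score seen for each subset size
    while len(selected) < target_size:
        # Inclusion step: add the single best remaining feature.
        candidates = [f for f in features if f not in selected]
        if not candidates:
            break
        f_add = max(candidates, key=lambda f: J(frozenset(selected + [f])))
        selected.append(f_add)
        best_at_size[len(selected)] = J(frozenset(selected))
        # Conditional exclusion ("floating") step.
        while len(selected) > 2:
            f_rem = max(selected,
                        key=lambda f: J(frozenset(s for s in selected if s != f)))
            score = J(frozenset(s for s in selected if s != f_rem))
            if score > best_at_size.get(len(selected) - 1, float("-inf")):
                selected.remove(f_rem)
                best_at_size[len(selected)] = score
            else:
                break
    return selected
```

With a toy additive criterion, the sketch first picks the strongest single feature and then the best partner for it.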
144.
Algorithmen im Wirkstoffdesign (Algorithms in Drug Design)
Thimm, Martin (31 January 2006)
Two important questions in drug design are the following: how to compute the similarity of two molecules, and how to cluster molecules by similarity. The first part of this work describes two approaches to comparing molecules for 3D similarity. The first algorithm uses only the 3D coordinates of the atoms as input. We show that this algorithm is able to detect similar activity or similar adverse reactions even with a simple, purely geometry-based scoring function, and that it finds similarities that previous, simpler approaches could not. The second algorithm additionally uses the connectivity structure of the molecular graphs as input. This fully flexible approach, in which all conformers of the molecules are treated simultaneously, can even find provably optimal solutions, and parameter settings for practically relevant instances allow running times that make it possible to search large databases. The second part describes two methods for organizing a set of molecular structures so that searching for geometrically similar ones is considerably faster than linear search. After analyzing the data with graph-theoretic methods, we design hierarchical and representative-based (dominating set) algorithms for two different ranges of similarity. Finally, we propose a new biclustering algorithm; biclustering problems appear mainly in the analysis of gene expression data. Again, graph-theoretic methods are our main tools: in our model, biclusters correspond to dense subgraphs of certain bipartite graphs. In a first phase, the algorithm deterministically constructs supersets of solution candidates, which are then thinned out heuristically. This new algorithm outperforms comparable earlier methods, in part considerably, and its simple structure makes it easy to customize for practical applications.
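The biclustering model described above, biclusters as dense subgraphs of a bipartite graph, can be illustrated with a deliberately simplified Python sketch: a greedy search for a large all-ones submatrix of a 0/1 matrix. The thesis's deterministic superset construction and heuristic thinning are not reproduced here; all names are illustrative:

```python
def greedy_bicluster(matrix):
    """Greedy search for a large all-ones submatrix (a toy bicluster).

    Each row is tried as a seed: its 1-columns become the candidate
    column set, every row that is 1 on all candidate columns joins the
    bicluster, and the (rows, cols) pair with the largest covered area
    wins. Rows/columns play the roles of the two sides of the bipartite
    graph; an all-ones submatrix is a biclique.
    """
    n_rows = len(matrix)
    best = (0, [], [])
    for seed in range(n_rows):
        cols = [j for j, v in enumerate(matrix[seed]) if v]
        if not cols:
            continue
        rows = [i for i in range(n_rows) if all(matrix[i][j] for j in cols)]
        area = len(rows) * len(cols)
        if area > best[0]:
            best = (area, rows, cols)
    return best[1], best[2]
```

On a small matrix with an embedded 2x3 block of ones, the sketch recovers exactly that block.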
146.
Problems, Models and Algorithms in One- and Two-Dimensional Cutting / Probleme, Modelle und Algorithmen in ein- und zweidimensionalem Zuschnitt
Belov, Gleb (20 January 2004)
Within such disciplines as Management Science, Information and Computer Science, Engineering, Mathematics and Operations Research, problems of cutting and packing (C&P) of concrete and abstract objects appear under various specifications (cutting problems, knapsack problems, container and vehicle loading, pallet loading, bin packing, assembly line balancing, capital budgeting, coin changing, etc.), although they all share essentially the same logical structure. In cutting problems, a large object must be divided into smaller pieces; in packing problems, small items must be combined into large objects. Most of these problems are NP-hard. Since the pioneering work of L.V. Kantorovich in 1939, which first appeared in the West in 1960, there has been a steadily growing number of contributions in this research area. In 1961, P. Gilmore and R. Gomory presented a linear programming relaxation of the one-dimensional cutting stock problem; the best-performing algorithms today are based on their relaxation. It was, however, more than three decades before the first 'optimum' algorithms appeared in the literature, and they even proved to perform better than heuristics. They were of two main kinds: enumerative algorithms working by separation of the feasible set, and cutting plane algorithms which cut off infeasible solutions. For many other combinatorial problems, these two approaches have been successfully combined. In this thesis we do so for one-dimensional stock cutting and two-dimensional two-stage constrained cutting. For the two-dimensional problem, the combined scheme mostly provides better solutions than other methods, especially on large-scale instances, in little time. For the one-dimensional problem, the integration of cuts into the enumerative scheme improves its results only in exceptional cases.
While the main optimization goal is to minimize material input or trim loss (waste), real-life cutting processes involve further criteria, e.g., the number of different cutting patterns (setups) and the number of open stacks. Some new methods and models are proposed for these criteria, and an approach combining both objectives is presented, to our knowledge for the first time. We believe this approach will be highly relevant for industry.
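For readers unfamiliar with one-dimensional stock cutting, a minimal Python sketch of a classical baseline heuristic, first-fit decreasing, may help fix ideas. This is not the thesis's branch-and-cut scheme, merely an illustration of the problem being solved; all names are assumptions:

```python
def first_fit_decreasing(piece_lengths, stock_length):
    """First-fit-decreasing heuristic for one-dimensional stock cutting.

    Sort pieces longest-first, place each into the first stock bar with
    enough residual length, and open a new bar when none fits. Returns
    the cutting patterns (one list of piece lengths per stock bar).
    """
    patterns, residuals = [], []
    for piece in sorted(piece_lengths, reverse=True):
        for k, r in enumerate(residuals):
            if piece <= r:
                patterns[k].append(piece)
                residuals[k] -= piece
                break
        else:  # no existing bar fits: open a new one
            patterns.append([piece])
            residuals.append(stock_length - piece)
    return patterns
```

Optimal methods such as the Gilmore-Gomory relaxation and the branch-and-cut scheme of the thesis improve on the trim loss this kind of heuristic leaves behind.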
147.
Modelo de roteamento de veículos aplicado ao planejamento do Inventário Florestal / A vehicle routing model applied to Forest Inventory planning
Meneguzzi, Cristiane Coutinho (04 October 2011)
In forestry, most research still emphasizes the harvesting and transport phases, which are directly responsible for the final cost of wood. Several other phases, however, offer great potential for study, such as the forest inventory. Information provided by the forest inventory is important for the planning of the entire forest enterprise, as it supports any decision involving forest resources. In this research, a vehicle routing problem (VRP) model was used to plan this activity. The VRP and its variants have been widely studied in recent years, mainly for their applicability and efficiency in generating solutions that reduce costs and distances. The general objective was to optimize the planning of the forest inventory activity using a VRP model and to evaluate the importance of this technique for the productivity of the activity. Among the factors that influence this productivity, spatial dispersion, a basic feature of forest stands, is controllable through techniques that tie it to planning. Studies show that this combination yields significant results.
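As a rough illustration of the kind of route construction a VRP model automates, here is a minimal nearest-neighbour sketch in Python. The depot, plot coordinates, and the `max_stops` workload limit are all hypothetical; the thesis's actual model is not reproduced:

```python
import math

def nearest_neighbor_routes(depot, plots, max_stops):
    """Nearest-neighbour route construction for visiting inventory plots.

    Starting at the depot, repeatedly drive to the closest unvisited
    plot; start a new route (e.g. a new field day) after `max_stops`
    plots. Returns a list of routes, each a list of plot indices in
    visiting order.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    unvisited = set(range(len(plots)))
    routes = []
    while unvisited:
        route, pos = [], depot
        while unvisited and len(route) < max_stops:
            nxt = min(unvisited, key=lambda i: dist(pos, plots[i]))
            route.append(nxt)
            pos = plots[nxt]
            unvisited.remove(nxt)
        routes.append(route)
    return routes
```

A real VRP formulation would optimize all routes jointly; this greedy construction is only a starting point that metaheuristics or exact methods then improve.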
148.
Um algoritmo exato para a otimização de carteiras de investimento com restrições de cardinalidade / An exact algorithm for portfolio optimization with cardinality constraints
Villela, Pedro Ferraz (12 August 2018)
Advisor: Francisco de Assis Magalhães Gomes Neto. Master's thesis, Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica.
In this work, we propose an exact method for solving quadratic programming problems with cardinality constraints. As an application, the method is used to generate the efficient Pareto frontier of a (bi-objective) portfolio optimization problem. The algorithm is based on the branch-and-bound method; the key to its success, however, lies in the use of Lemke's method to solve the subproblems associated with the nodes of the branch-and-bound tree. Some heuristics are also introduced to accelerate the convergence of the method. The computational results show that the proposed algorithm is efficient. / Master of Applied Mathematics (Optimization)
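As a purely illustrative baseline for the cardinality-constrained problem, the following Python sketch enumerates all subsets of exactly k assets under an equal-weight simplification. The thesis instead prunes this search tree with branch and bound and solves a full QP (via Lemke's method) at each node; the equal weights and all names here are assumptions:

```python
from itertools import combinations

def min_variance_subset(cov, k):
    """Brute-force baseline for cardinality-constrained selection.

    Enumerates every subset of exactly k assets and returns the one
    whose equal-weight portfolio has minimum variance w'Sw, with
    w_i = 1/k on the chosen assets. `cov` is the covariance matrix as
    a list of lists.
    """
    n = len(cov)
    best_var, best_subset = float("inf"), None
    for subset in combinations(range(n), k):
        w = 1.0 / k
        var = sum(cov[i][j] * w * w for i in subset for j in subset)
        if var < best_var:
            best_var, best_subset = var, subset
    return best_subset, best_var
```

The number of subsets grows combinatorially in n, which is exactly why the thesis's branch-and-bound pruning matters for realistic portfolio sizes.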
149.
Photoacoustic Calorimetry Studies of the Earliest Events in Horse Heart Cytochrome-c Folding
Word, Tarah A. (16 September 2015)
The protein folding problem involves understanding how the tertiary structure of a protein is related to its primary structure. Understanding the thermodynamics associated with the rate-limiting steps of the earliest folding events is therefore crucial to understanding how proteins adopt native secondary and tertiary structures. To elucidate the mechanism and pattern of protein folding, an extensively studied protein, cytochrome-c (Cc), was chosen as a folding system for obtaining detailed time-resolved thermodynamic profiles of the earliest events in the folding process. Cytochrome-c is an ideal system for several reasons; one is that it can unfold and refold reversibly without loss of the covalently attached heme group. A number of studies have shown that, under denaturing conditions, binding of carbon monoxide (CO) to the ferrous Cc (Fe2+Cc) heme group disrupts the axial heme methionine-80 (Met80) bond, ultimately unfolding the protein. CO photolysis of this ferrous species produces a transient unfolded protein poised in a non-equilibrium state, the equilibrium state being the native folded Fe2+Cc complex. This allows the refolding reaction to be photo-initiated and monitored on ns to ms timescales. While CO cannot bind to the ferric form, nitrogen monoxide (NO) photo-release has been developed to photo-trigger ferric Cc (Fe3+Cc) unfolding under denaturing conditions: photo-dissociation of NO leaves the Fe3+ complex in a conformational state that favors unfolding, allowing the early unfolding events of Fe3+Cc to be probed. Overall, the results presented here use the ligands CO and NO together with photoacoustic calorimetry (PAC) to photo-trigger the folding/unfolding reactions of Cc (and modified Cc).
This yields the enthalpy and molar volume changes directly associated with the initial folding/unfolding events in the reaction pathways of both the Fe2+ and Fe3+Cc systems, which are essential to understanding the driving forces that form the native tertiary conformation. The PAC data show that protein folding results from a hierarchy of events, potentially including the formation of secondary structure, hydrophobic collapse, and/or reorganization of the tertiary complex, occurring over roughly nanosecond to tens-of-microseconds time ranges. In addition, the PAC kinetic fits presented in this work are the first to report Cc folding exhibiting heterogeneous kinetics (in some cases), identified by using a stretched exponential decay function.
150.
Abenomics: Towards Brighter Future or More of the Same? / Abenomics: Vstříc světlejším zítřkům, nebo stále to samé?
Pinta, Ondřej (January 2014)
This thesis investigates the impact on the economy of the Abenomics policies, named after the Japanese Prime Minister Shinzo Abe. His so-called "three arrows" agenda includes fiscal expansion, quantitative and qualitative monetary easing, and regulatory reform. This work assesses the fulfillment of the stated goals and compares Abenomics to previous policies. Abe's cabinet succeeded in raising inflation and depreciating the yen; debt growth has almost halted and GDP has mildly recovered, but the economy is still far from stable. The thesis also explores further issues facing the Japanese economy, such as the shutdown of nuclear power plants and the effects of the zero lower bound constraint. To assess the real results of Abenomics, this work introduces a synthetic counterfactual: a model of an alternate Japan in which Abe had not taken office. The results suggest that the impact of Abenomics on GDP per capita is slightly positive or negligible.