1.
The role of solvent extraction in the chemical characterization of corn stover feedstock / Thammasouk, Khamphet / 29 May 1996
The consequences of extracting corn stover feedstock with either 95% ethanol or hot
water prior to the chemical analysis of the macrocomponents of that feedstock have been
determined. Reports by others have recommended the removal of extraneous substances
by solvent extraction prior to chemical analyses (Browning, 1967; TAPPI, 1988). The
95% ethanol extraction evaluated in this study is currently the "standard" method
recommended by the National Renewable Energy Laboratory, Golden, CO. Hot water
extractions were tested as a simple, less time-consuming and less expensive alternative to
ethanol extractions. Compositional analyses involved the quantification of glycans, Klason
lignin, acid soluble lignin, ash, protein, acetic acid, and uronic acids.
The summative analyses of the native, ethanol-extracted and water-extracted
feedstocks were all in the range of 97 to 98%. Ethanol extractions removed 4.9% of the
feedstock dry weight, compared to 17.2% of the dry matter being extracted with hot
water. The extractives obtained via ethanol had negligible amounts of glycans. In
contrast, the water-extracted solids contained nearly 10% of the native feedstock's total
glucan. Pre-extracting the feedstock with ethanol had little effect, relative to the native
feedstock, on the quantification of glycan components. In contrast, the water-extracted
feedstock measured significantly lower in total glucans and total glycans than the native
feedstock. The lower values associated with the water extraction were due to the actual
extraction of glucans from the feedstock, and not due to analytical interferences associated
with the extractives. Ethanol- and water-extracted feedstocks measured significantly lower
in Klason lignin than the corresponding native feedstock. This was presumably due to the
removal of Klason lignin impurities present in the native feedstock, and not due to the
extraction of lignin itself.
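For reference, a summative (mass-closure) value such as the 97 to 98% reported here is simply the sum of the measured macrocomponents on a dry-weight basis; the illustration below uses component values typical of corn stover, not the study's actual data:

    Closure = glucan + other glycans + Klason lignin + acid-soluble lignin + ash + protein + acetic acid + uronic acids
            \approx 37.5 + 22.5 + 17.5 + 2.5 + 6.5 + 5.0 + 2.5 + 3.5 = 97.5\%

A closure near 100% indicates that the individual assays account for essentially all of the feedstock mass.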
The combined results from this study indicate that an informative approach to the
analysis of corn stover feedstock would include the pre-extraction of the feedstock with
hot water prior to further analyses. The appropriate macrocomponent analyses should
then be done on both the extracted feedstock and the "extractives" obtained from that
feedstock. Analysis of the extracted feedstock, as compared to the native feedstock,
would provide more accurate estimates of the cellulose and lignin content of the
feedstock. The summative analysis of both the extracted solids and the extractives will
provide a reliable estimate of the total amount of carbohydrate potentially available in the
feedstock for microbial fermentation to ethanol. / Graduation date: 1996
2.
Influence of extractives on the chemical analysis of switchgrass / Tandjo, Djuhartini / 30 May 1996
This thesis summarizes an investigation into the need for removing extractives
from herbaceous biomass feedstocks prior to their chemical characterization. Switchgrass
(Panicum virgatum) was used in this study as a representative herbaceous biomass
feedstock. The influence of extractives on the chemical analysis of switchgrass was assessed
by comparing the composition of native switchgrass and solvent-extracted switchgrass
preparations. Solvent-extracted switchgrass was prepared by extracting the native
feedstock with 95% ethanol, with hot water, or sequentially with ethanol and then water.
Each of the feedstocks was analyzed for glycans, Klason lignin, acid soluble lignin,
protein, ash, acetic acid and uronic acids. The results demonstrate that the extractives in
native switchgrass significantly interfere with the analysis of Klason lignin. The lignin
content of the feedstock was overestimated if the extractives were not removed prior to
the analysis. The extractives in switchgrass did not affect glycan analyses. However,
some soluble sugars are removed from the feedstock during the solvent extraction
process. Total extractives removed by ethanol, water and ethanol/water amounted to
9.74%, 16.42%, and 19.11% of the feedstock's total solids, respectively. These amounts
of extractives increased Klason lignin values by 4%, 4.5%, and 6.5% (as a weight percent of
total solids), respectively. Most of the extractives in switchgrass are water-soluble and
approximately one-quarter of these extractives measure as Klason lignin. The removal of
the water soluble extractives from the feedstock improved the mass closure values for the
feedstock's summative analysis. Sequential ethanol and hot water extraction removed most of the extractives in switchgrass, yielding a 100.4% mass balance.
The recommended approach for the analysis of herbaceous biomass feedstocks will
include sequential 95% ethanol and hot water extraction followed by chemical analysis on
both the pre-extracted substrate and the extractives obtained from that substrate. / Graduation date: 1996
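The "approximately one-quarter" estimate can be checked against the figures above: hot-water extraction removed 16.42% of total solids, and the associated extractives inflated the Klason lignin value by about 4.5 percentage points, so

    4.5 / 16.42 \approx 0.27,

that is, roughly a quarter of the water-soluble extractives had been measuring as Klason lignin in the native feedstock.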
3.
Chemometrics applied to the discrimination of synthetic fibers by microspectrophotometry / Reichard, Eric Jonathan / 03 January 2014
Indiana University-Purdue University Indianapolis (IUPUI) / Microspectrophotometry is a quick, accurate, and reproducible method to compare colored fibers for forensic purposes. The use of chemometric techniques applied to spectroscopic data can provide valuable discriminatory information, especially when looking at a complex dataset. Differentiating a group of samples by employing chemometric analysis increases the evidential value of fiber comparisons by decreasing the probability of false association. The aims of this research were to (1) evaluate the chemometric procedure on a data set consisting of blue acrylic fibers and (2) accurately discriminate between yellow polyester fibers with the same dye composition but different dye loadings, along with introducing a multivariate calibration approach to determine the dye concentration of fibers. In the first study, background-subtracted and normalized visible spectra from eleven blue acrylic exemplars dyed with varying compositions of dyes were discriminated from one another using agglomerative hierarchical clustering (AHC), principal component analysis (PCA), and discriminant analysis (DA). AHC and PCA results agreed, showing similar spectra clustering close to one another. DA indicated a total classification accuracy of approximately 93%, with only two of the eleven exemplars confused with one another. This was expected because the two exemplars consisted of the same dye compositions. An external validation of the data set was performed and showed consistent results, which validated the model produced from the training set. In the second study, background-subtracted and normalized visible spectra from ten yellow polyester exemplars dyed with different concentrations of the same dye, ranging from 0.1 to 3.5% (w/w), were analyzed by the same techniques. Three classes of fibers, with a classification accuracy of approximately 96%, were found, representing low, medium, and high dye loadings. Exemplars with similar dye loadings could be readily discriminated in some cases based on a classification accuracy of 90% or higher and a receiver operating characteristic area under the curve score of 0.9 or greater. Calibration curves based upon a proximity matrix of dye loadings between 0.1 and 0.75% (w/w) were developed that provided better accuracy and precision than a traditional approach.
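A minimal sketch of the kind of chemometric workflow described in this abstract (spectral normalization followed by PCA and discriminant analysis with cross-validation) is given below using scikit-learn; the file names, number of components and labels are illustrative assumptions, not the study's data or code:

    import numpy as np
    from sklearn.preprocessing import normalize
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    # rows = individual fibre spectra, columns = absorbance per wavelength
    # (hypothetical files; background subtraction assumed already done)
    spectra = np.loadtxt("fibre_spectra.csv", delimiter=",")
    labels = np.loadtxt("exemplar_labels.csv", delimiter=",")

    X = normalize(spectra)                              # unit-norm each spectrum
    scores = PCA(n_components=5).fit_transform(X)       # project onto principal components

    lda = LinearDiscriminantAnalysis()                  # discriminant analysis (DA)
    accuracy = cross_val_score(lda, scores, labels, cv=5).mean()
    print(f"cross-validated classification accuracy: {accuracy:.1%}")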
4.
Developing a methodology model and writing a documentation template for network analysis / Skagerlind, Mikael / January 2016
This report focuses on finding best practices and a better methodology for performing computer network analysis and troubleshooting. When network analysis is performed, computer network data packets are captured using data-capturing software. The data packets can then be analysed through a user interface to reveal potential faults in the network. Network troubleshooting focuses more on methodology when locating a fault in a network. The thesis work was performed at Cygate, which has recently identified the need for an updated network analysis methodology and a documentation template for documenting network analysis results. Thus, the goal of this thesis has been to develop an elaborated methodology, discover best practices for network analysis, and write a documentation template for documenting network analysis work. As part of discovering best practices and a methodology for network analysis, two laboratory tests were performed to gather and analyse results. To avoid producing too many results while still keeping the tests within the scope of this thesis, the laboratory tests were limited to four network analysis tools and the two test cases explained below. In the first laboratory test, voice traffic (used in IP phones, Skype, etc.) is sent over the network using a computer program during three different test sequences. In two of the test sequences, other traffic also congests the network to disturb the sensitive voice traffic. The program used to send the voice traffic then outputs values: packet delay, jitter (variation in delay) and packet loss. From these values, one can decide whether the network is fit for carrying the sensitive voice traffic. In two of the test cases satisfying results were gathered, but in one of them the results were very bad due to high packet loss. The second laboratory test focused more on methodology than on gathering and analysing results. The goal of that test was to find and prove what was wrong with a slow network, which is a common fault in today's networks for several reasons. In this case, the network was slow because large amounts of malicious traffic were congesting it; this was proven using different commands in the network devices and different network analysis tools to find out what type of traffic was flowing in the network. The documentation template written as part of this thesis contains appealing visuals and explains some integral parts of presenting results when network analysis has been performed. The goal of the documentation template was an easy-to-use template that could be filled in with the necessary text under each section to simplify the documentation writing. The template contains five sections (headlines), each with an explanation of what information is useful to include under that section. Cygate's network consultants will use the documentation template when they perform network analysis. For future work, the laboratory test cases could be expanded to include Quality of Service (QoS) as well. QoS is a widely deployed technology used in networks to prioritise different types of traffic. It could be used in the test cases to prioritise the voice traffic, in which case the results would be completely different and more favourable.
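As an illustration of the three voice-quality values mentioned above (packet delay, jitter and packet loss), the sketch below computes them from per-packet send and receive timestamps; it is a generic calculation under the stated assumptions, not the tool used in the thesis:

    from statistics import mean

    def voice_metrics(sent, received):
        """sent/received: {sequence number: timestamp in seconds}."""
        delays = [received[s] - sent[s] for s in sorted(received) if s in sent]
        loss = 1 - len(delays) / len(sent)                   # fraction of packets lost
        jitter = (mean(abs(delays[i] - delays[i - 1])        # mean delay variation
                       for i in range(1, len(delays)))
                  if len(delays) > 1 else 0.0)
        return mean(delays), jitter, loss

    # illustrative timestamps, not measurements from the laboratory tests
    sent = {1: 0.00, 2: 0.02, 3: 0.04, 4: 0.06}
    received = {1: 0.05, 2: 0.08, 3: 0.10}                   # packet 4 was lost
    delay, jitter, loss = voice_metrics(sent, received)
    print(f"delay {delay*1000:.1f} ms, jitter {jitter*1000:.1f} ms, loss {loss:.0%}")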
5.
Intellectual capital reporting in New Zealand: refining content analysis as a research method / Steenkamp, Natasja / Unknown Date
This study examines voluntary intellectual capital reporting (ICR) in New Zealand firms' annual reports, with a view to contributing to understanding ICR practice. This study also reflects on content analysis with a view to refining the methodology when applied to investigating ICR. The literature includes widespread claims that intellectual capital (IC) resources are important value drivers and assets, and that IC information should be reported externally. However, complexities relating to identifying IC prevent it from being recognised as an asset under current accounting regulations. Consequently, the traditional financial reporting system is being criticised as out-of-date, giving deficient and irrelevant information, and having lost its value relevance. Numerous scholars have investigated voluntary ICR in several countries, but have presented different results and findings. The literature argues that the results of many ICR studies cannot be meaningfully compared because inconsistent data collection instruments have been applied. To advance ICR research, further refining and developing of the methodology is advocated; problems relating to applying methodological issues need to be resolved. Moreover, to establish consensus about ICR, more research and evidence is needed concerning exactly what and how IC is reported. The 2004 annual reports of the 30 largest (by market capitalisation) New Zealand firms listed on the New Zealand Stock Exchange were analysed. Content analysis was applied to determine what and how IC is reported. Inferences about what IC is communicated were made based on an analysis of the content of texts and visual representations. To determine how IC is reported, voluntary reporting was categorised according to the form, nature and location of the disclosure. Frequencies of mention were recorded. Hence, each incidence of occurrence was coded and counted. This study reflected on content analysis methodology by searching the literature for guidance on how to apply this approach and how to deal with the challenges and problems it poses. The thesis discusses methodological issues that could be applied differently, and hence hinder the replicability and comparability of ICR studies. Moreover, the ICR literature provided limited guidance about how to deal with methodological challenges and problems, and revealed an absence of explicit recording instructions. Therefore, explicating this study's recording instructions should enhance replicability and comparability of future ICR research and hence further refine the methodology. Some results of this content analysis study disconfirm those of prior research: New Zealand firms show high levels of ICR, the most reported IC category is human capital, and the most reported IC item is employees. In line with prior research, this study showed that most ICR is presented in declarative terms. Moreover, more than one-third of New Zealand firms' ICR is disclosed as pictures. This indicates the importance of pictorial information as a means of reporting IC and the need to include graphics when conducting ICR research. This study's findings also indicate a narrative approach, similar to the European notion of storytelling, to voluntarily report IC information. This approach suggests that narratives have potential for voluntary ICR, as an approach that departs from a measurement and quantification approach.
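To make the coding-and-counting step concrete, the sketch below tallies coded disclosure incidences by IC category and by form of disclosure; the categories and records are hypothetical examples, not data from the study:

    from collections import Counter

    # each coded incidence: (IC category, IC item, form of disclosure)
    codings = [
        ("human capital", "employees", "picture"),
        ("human capital", "employees", "declarative text"),
        ("internal capital", "processes", "declarative text"),
        ("external capital", "brands", "numerical"),
    ]

    by_category = Counter(category for category, _, _ in codings)
    by_form = Counter(form for _, _, form in codings)

    print("frequencies by IC category:", dict(by_category))
    print("frequencies by form of disclosure:", dict(by_form))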
6.
FOLE: A Conceptual Framework for Elasticity Performance Analysis in Cloud Computing Environments / Emanuel Ferreira Coutinho / 03 November 2014
Currently, many customers and providers are using resources of Cloud Computing environments, such as processing and storage, for their applications and services. Given the ease of use, based on the pay-per-use model, it is natural that the number of users and their workloads also grow. As a result, providers must expand their resources and maintain the agreed level of quality for customers, or else break the Service Level Agreement (SLA) and incur the resulting penalties. With the increase in computational resource usage, a key feature of Cloud Computing has become quite attractive: elasticity. Elasticity can be defined as how a computational cloud adapts to variations in its workload through resource provisioning and deprovisioning. Due to the limited availability of information regarding the configuration of experiments, it is in general not trivial to implement elasticity concepts, much less apply them in cloud environments. Furthermore, the way of measuring cloud elasticity is not obvious, and there is not yet a standard for this task. Moreover, its evaluation can be performed in different ways owing to the many technologies and strategies for providing cloud elasticity. A common aspect of elasticity performance analysis is the use of environment resources, such as CPU and memory, which, even without a specific metric, allows an indirect assessment of elasticity. In this context, this work proposes FOLE, a conceptual framework for conducting performance analysis of elasticity in Cloud Computing environments in a systematic, flexible and reproducible way. To support the framework, we proposed a set of specific metrics for elasticity and metrics for its indirect measurement. For the measurement of elasticity in Cloud Computing, we proposed metrics based on concepts from Physics, such as strain and stress, and from Microeconomics, such as Price Elasticity of Demand. Additionally, we also proposed metrics based on resource allocation and deallocation operation times, and on the resources used, to support the measurement of elasticity. For verification and validation of the proposal, we performed two experiments, one in a private cloud and the other in a hybrid cloud, using microbenchmarks and a classic scientific application, on an infrastructure designed around concepts of Autonomic Computing. Through these experiments, FOLE had its activities validated, allowing the systematization of an elasticity performance analysis. The results show that it is possible to satisfactorily assess the elasticity of a Cloud Computing environment using specific metrics based on other areas of knowledge, complemented by metrics related to operation times and resources.
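The microeconomic analogy mentioned above can be made concrete. Price Elasticity of Demand relates the relative change in quantity demanded to the relative change in price; a cloud-elasticity metric built on the same idea (one plausible reading of the framework, not necessarily FOLE's exact definition) relates the relative change in allocated resources to the relative change in workload:

    E_d = \frac{\Delta Q / Q}{\Delta P / P}
    \qquad\Longrightarrow\qquad
    E_{\text{cloud}} \approx \frac{\Delta R / R}{\Delta W / W}

where R is the amount of allocated resources (for example, virtual machines or CPU share) and W the offered workload; values near 1 would indicate resource allocation that tracks the workload closely.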
7.
Inovace technického systému s patentovou rešerší / System engineering innovation with patent search / Mrkvan, Pavel / January 2009
The diploma thesis deals with engineering system innovation using the TRIZ methodology. This methodology uses an algorithmic approach to problem solving and divides innovation into two parts: an analysis part and a synthesis part. The analytical part consists of steps that help find the problem and its essence, while the synthesis part offers procedures to deal with it. The general aim is to apply the steps and procedures contained in the analysis and synthesis parts of the TRIZ methodology to the innovation of a specific engineering system.
8.
Dynamic Analysis of Substructures with Account of Altered Restraint When Tested in Isolation / Amid, Ramin / 04 1900
The objective of this research is to simulate the response of an isolated substructure such that the response of the substructure in isolation is the same as that of the substructure within the structure. Generally, the behaviour of an isolated subsystem (substructure) subjected to dynamic loading is different from the behaviour of the same substructure within a system (structure). This is primarily caused by the boundary conditions imposed on the substructure by the surrounding subsystem in the entire structure. A new systematic approach (methodology) is developed for performing impact analysis on the isolated substructure. The developed technique is fundamentally based on enforcing the mode shapes around the boundary of the substructure in the full structure to be similar to the mode shapes of the isolated substructure. This is achieved by providing a consistent adjustment to the loading conditions (impact velocity and mass) to account for the loss of restraint at the interface with the full structure. Another important aspect of this research is the experimental validation of the proposed method. This method allows the experimental testing of an isolated substructure, since the testing is performed by impacting the isolated substructure with an appropriate mass and velocity. In the finite element analysis, the structure is analyzed, and then the isolated-substructure simulation is performed using the developed technique. The results obtained from the numerical simulations, for both the substructure in situ and the substructure in isolation, are compared and found to be in good agreement. For instance, the effective plastic strains and the kinetic and internal energies for the substructure within the structure and the substructure in isolation show discrepancies of 7% to 12% between the two analyses.
The numerical simulations of the full structure are verified by performing a series of experimental impact tests on the full structure. Finally, the experimental applicability of the technique is studied and its results are validated with an FE simulation of the substructure in isolation. This problem of experimentally testing an isolated substructure had previously not been addressed. The comparisons of FE simulation and experimental testing are made based on the deformed geometries, out-of-plane deflections and accelerometer readings. For example, the out-of-plane deformations from the FE analysis and the experimental test were determined to be within 7% to 9% of each other. The experimental validation and numerical simulations indicate that the technique is reliable, repeatable and can predict the dynamic response of substructures when tested in isolation. / Thesis / Doctor of Philosophy (PhD)
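The comparison of boundary mode shapes on which the technique rests is often quantified in structural dynamics with the Modal Assurance Criterion (MAC); the thesis does not state which measure it uses, so the sketch below is an assumed illustration of such a comparison, not the author's implementation:

    import numpy as np

    def mac(phi_full, phi_iso):
        """Modal Assurance Criterion between two mode-shape vectors sampled at
        the same boundary degrees of freedom; 1.0 means identical shape."""
        return abs(phi_full @ phi_iso) ** 2 / ((phi_full @ phi_full) * (phi_iso @ phi_iso))

    # hypothetical boundary mode shapes: full structure vs. isolated substructure
    phi_full = np.array([0.00, 0.31, 0.59, 0.81, 0.95])
    phi_iso = np.array([0.00, 0.29, 0.57, 0.83, 0.96])
    print(f"MAC = {mac(phi_full, phi_iso):.3f}")    # close to 1.0: restraint well reproduced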
9.
Freedom of female sexuality in Calixthe Beyala's C'est le soleil qui m'a brûlée: a critical analysis in translation / Chomga, Annick Vanessa Magne / January 2016
A research report submitted to the Faculty of Humanities, University of the Witwatersrand, Johannesburg, in partial fulfilment of the requirements for the degree of Master of Arts in Translation, Johannesburg, March 2016 / The study provides a comparative and contrastive analysis of Calixthe Beyala's novel, C'est le soleil qui m'a brûlée, and its translation, The Sun Hath Looked upon Me, by Marjolijn de Jager, focusing on textual, paratextual and metatextual elements of these two texts. The analysis shows how the translator dealt with the disruptive stylistic effects of a postcolonial text and the themes around which the novel is centred. Problems and solutions related to postcolonial translations and relevant theories are addressed in the analysis. The textual analysis is done using Gérard Genette's (1997) model of analysis of the elements of the paratext and Vinay and Darbelnet's (1995) model of comparative analysis of French and English. / GR2017