31 |
Interação genótipo x ambiente em soja com ênfase na estratificação ambiental para a região central do Brasil / Genotype by environment interaction in soybean with emphasis in the environmental stratification for central region of Brazil. Branquinho, Rodrigo Gomes, 19 December 2011 (has links)
Conselho Nacional de Desenvolvimento Científico e Tecnológico - CNPq / The objective of this study was to establish a consistent environmental
stratification for the region of soybean cropping in Central Brazil, based on genotype by
environment (GE) interaction analysis. For this, yield data from variety trials conducted by
Embrapa Cerrados, in partnership with other Brazilian institutions, during seven growing
seasons (2002/03 to 2008/09) were used. The study covered six experimental sets, related
to the genotypes of three maturity groups (early, medium and late) and two commercial
groups (conventional and transgenic RR soybean), totaling 559 trials analyzed.
The statistical treatment of the data was performed in two stages: first, an analysis of
variance was performed for each experiment, from which estimates of the treatment means
(genotype-environment combinations) were obtained. In the second stage, the joint
and GE interaction analyses were performed. The mean yield of each genotype in
each environment was submitted to the AMMI analysis (Additive Main effects and
Multiplicative Interaction model), which led to the choice of a model with only one principal
component (AMMI1). As a result of this analysis, the genotypes and environments were
jointly represented in a scatter plot called a biplot (a graph that displays the rows and
columns of a matrix; in this case, genotypes and environments are the margins of that table). To stratify
the target region, the approach of winner genotypes (Gauch & Zobel, 1997; Crop Sci. 37:
311-326) was used. In this approach, each stratum is composed of locations that share the
same winner genotype (the one that leads the mean-yield ranking of a location). In the
AMMI1 biplot, the boundaries of each stratum were identified by horizontal lines drawn
from the ordinate points (scores) corresponding to the environments of transition between
two strata, which are characterized by their winner genotypes. With this information, the
environmental strata were established for each growing year and experimental set. The
maturity group of the assessed lines determined the environmental stratification obtained.
Thus, the following locations were grouped with other localities, presenting a characteristic
of redundancy: a) early maturity group (seven strata): (Campo Novo do Parecis, Maracajú,
São Miguel do Araguaia, Tangará da Serra); (Conquista, Nuporanga, Sidrolândia, Sorriso);
(Cristalina, Iraí, Sacramento); (Montividiu, Sonora, Tapurah); (Capinópolis, Senador
Canedo); (Guaíra, Morro Agudo); and (Lucas do Rio Verde, Sapezal); b) medium maturity
group (four strata): (Anápolis, Montividiu, Tangará da Serra); (Barreiras, Campo Novo do
Parecis, Uberaba-Chapadões); (Chapadão do Sul, Conquista, Maracajú, Sonora); and (São
Gabriel, Sorriso, Uberaba-Epamig); c) late maturity group (five strata): (Campo Novo do
Parecis, Planaltina, Senador Canedo, Tapurah); (Iraí, Sacramento, Sonora); (Lucas do Rio
Verde, Sorriso); (Goiatuba, Tangará da Serra); and (Barreiras, São Desidério). Key
locations for conducting the trials in the final stage of genotypic evaluation
(advanced variety trials) were also identified: a) early maturity group: Anápolis, Barretos, Campos de Júlio,
Capinópolis, Chapadão do Céu, Chapadão do Sul, Goiatuba, Igarapava, Jataí, Luziânia,
Morro Agudo, Planaltina, Primavera do Leste, Sacramento, São Gabriel do Oeste, São
Miguel do Araguaia, Sapezal, Sidrolândia, Sonora, Uberaba-Chapadões, Uberaba-Epamig
and Unaí; b) medium maturity group: Barreiras, Barretos, Campo Alegre, Campos de Júlio, Capinópolis, Chapadão do Céu, Chapadão do Sul, Cristalina, Goiatuba, Iraí, Jataí, Lucas
do Rio Verde, Luziânia, Montividiu, Perolândia, Planaltina, Primavera do Leste, Rio
Verde, Sacramento, São Desidério, Senador Canedo, Sorriso and Unaí; c) late maturity
group: Anápolis, Campo Alegre, Campo Novo do Parecis, Campos de Júlio, Capinópolis,
Chapadão do Céu, Chapadão do Sul, Cristalina, Goiatuba, Jataí, Luziânia, Montividiu,
Primavera do Leste, Rio Verde, São Desidério, São Gabriel do Oeste, Sonora, Sorriso,
Uberaba-Chapadões, Uberaba-Epamig and Unaí. Finally, among the locations recommended
for the network of advanced trials, one was also appointed as a key location for conducting
the initial stages of genotype assessment in each maturity group. Campos de Júlio (for the
early group) and Rio Verde (for the medium and late groups) were thus indicated, because
they produced the best rankings of the winner genotypes across the target region.
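The AMMI1 procedure described above, additive main effects plus the first multiplicative (SVD) component of the GE interaction followed by a winner-per-environment readout, can be sketched briefly. This is a generic illustration with made-up genotype and environment labels, not the thesis's data or analysis code:

```python
import numpy as np

def ammi1_winners(Y, genotypes, environments):
    """AMMI1 sketch: fit additive main effects, then keep only the first
    principal component (SVD) of the residual interaction matrix.
    Y is a (genotypes x environments) matrix of mean yields."""
    grand = Y.mean()
    g_eff = Y.mean(axis=1) - grand            # genotype main effects
    e_eff = Y.mean(axis=0) - grand            # environment main effects
    # interaction residuals after removing the additive part
    R = Y - grand - g_eff[:, None] - e_eff[None, :]
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    # AMMI1 estimate: additive model plus the first multiplicative term
    Y1 = (grand + g_eff[:, None] + e_eff[None, :]
          + s[0] * np.outer(U[:, 0], Vt[0, :]))
    # the winner genotype of an environment leads its estimated mean ranking
    winners = {environments[j]: genotypes[int(np.argmax(Y1[:, j]))]
               for j in range(Y.shape[1])}
    return Y1, winners
```

Environments sharing a winner genotype would then be grouped into one stratum, mirroring the winner-genotype approach of Gauch & Zobel used in the study.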
|
32 |
A Study In Combinatorial Auctions. Bilge, Betul, 01 August 2004 (has links) (PDF)
With the emergence of electronic commerce and low transaction costs on the Internet, interest in the design of new auction mechanisms has arisen. Recently, many researchers in computer science, economics, business, and game theory have presented valuable studies on online auctions and auction theory.
From a computational perspective, combinatorial auctions are perhaps the most challenging. Combinatorial auctions, that is, auctions where bidders can bid on combinations of items, tend to lead to more efficient allocations than traditional auction mechanisms in multi-item, multi-unit situations where the agents' valuations of the items are not additive. However, determining the winners so as to maximize revenue is NP-complete.
In this study, we first analyze existing approaches to the combinatorial auction problem. Based on this analysis, we choose three approaches to build our models: search, descending simultaneous auctions, and an IP (Integer Programming) formulation. The performances of the models are compared using computer simulations of a bandwidth allocation system. Finally, a combinatorial auction tool is built that can be used for online auctions and e-procurement systems.
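The winner determination problem mentioned above can be made concrete with an exhaustive search over disjoint package bids. This toy sketch (with hypothetical bids) shows why the problem explodes combinatorially and why practical systems turn to IP formulations:

```python
from itertools import combinations

def winner_determination(bids):
    """Exhaustive-search sketch of winner determination: choose a set of
    package bids on pairwise-disjoint bundles that maximizes total revenue.
    bids: list of (bundle, price) pairs, bundle being a frozenset of items.
    Runtime is exponential in the number of bids, which is why practical
    systems use integer programming or specialized search instead."""
    best_value, best_set = 0, []
    for r in range(1, len(bids) + 1):
        for combo in combinations(range(len(bids)), r):
            allocated = set()
            feasible = True
            for i in combo:
                bundle, _ = bids[i]
                if allocated & bundle:      # an item cannot be sold twice
                    feasible = False
                    break
                allocated |= bundle
            if feasible:
                value = sum(bids[i][1] for i in combo)
                if value > best_value:
                    best_value, best_set = value, list(combo)
    return best_value, best_set
```

A bidder whose valuation of a bundle exceeds the sum of its parts is exactly the non-additive case the abstract describes; the exhaustive search will then prefer the package bid over selling the items separately.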
|
33 |
Imperfections des processus de choix sociaux : études des conflits électoraux / Imperfections of the processes of social choice: studies of electoral conflicts. Chauveau, Louis, 06 October 2016 (has links)
This thesis deals with paradoxes studied in social choice theory. The Ostrogorski paradox over two programmatic axes was treated, in particular its probability of occurrence when a discriminating criterion over the axes is added at the moment of the voter's choice: an exact formula was developed for finite population sizes to measure its occurrence for different electorate sizes, and a maximum bound emerges around 0.085. Among the various anomalies studied in social choice theory that affect the functioning of democracies, the referendum paradox holds a special place because of its fairly recurrent observation in recent electoral history. One aim of this thesis was to determine a usable method to measure its probability of occurrence precisely, for given sizes of the electorate and given districtings. In particular, a way was sought to compare its frequency according to the number of constituencies. A formula was thus determined for divisions of the electorate into 3, 5, 7 and 9 constituencies of homogeneous size. A second result on the same paradox was to relax the assumption of perfectly homogeneous constituency sizes, in order to measure the effect of size variation on the probability of conflict for a division into 3 constituencies. Further research directions were also explored, in particular the possibility of partially relaxing the impartial culture assumption with a division into 3 constituencies. An inventory of types of institutional architecture was also carried out, from which a global classification into four categories was established, and an attempt was made to determine their weight in the conflicts of powers observed in some countries, notably by drawing on results obtained through the referendum paradox.
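As a sanity check on results of this kind, the referendum paradox frequency can also be approximated by Monte Carlo simulation. The sketch below (illustrative parameters, not the thesis's exact formula) counts how often the side winning a majority of three equal districts loses the overall popular vote under the impartial culture assumption:

```python
import random

def referendum_paradox_freq(n_trials=20000, district_size=33, seed=1):
    """Estimate the referendum paradox frequency for 3 equal-sized
    districts under impartial culture (every voter votes A or B with
    probability 1/2). Odd district sizes avoid ties."""
    rng = random.Random(seed)
    paradoxes = 0
    for _ in range(n_trials):
        # votes for side A in each of the three districts
        votes_a = [sum(rng.random() < 0.5 for _ in range(district_size))
                   for _ in range(3)]
        districts_won_by_a = sum(v > district_size / 2 for v in votes_a)
        a_wins_districts = districts_won_by_a >= 2
        a_wins_popular = sum(votes_a) > 3 * district_size / 2
        if a_wins_districts != a_wins_popular:
            paradoxes += 1
    return paradoxes / n_trials
```

An exact formula of the kind derived in the thesis replaces this sampling with a sum over all vote configurations, which is what makes comparisons across 3, 5, 7 and 9 districts tractable.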
|
34 |
Non-Median and Condorcet-loser Presidents in Latin America: an instability factor / Presidentes no medianos y perdedores de Condorcet en América Latina: un factor de inestabilidad. Colomer, Josep M., 25 September 2017 (has links)
A favorable condition for good governance is that elected presidents obtain the support of both the median voter and the median legislator. Several electoral rules are evaluated by their results in 111 presidential and 137 congressional elections in 18 Latin American countries during the current democratic periods. The frequency of median-voter or Condorcet-winner presidents appears to be higher under rules with a second-round runoff than under simple plurality rule. The victory of the Condorcet loser, the most rejected candidate, is ruled out under majority runoff rule. More than half of democratic presidents have not belonged to the median voter's party in the presidential or congressional elections. Many of them have faced wide popular and political opposition and entered into inter-institutional conflict.
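Identifying Condorcet winners and losers from ranked preferences, as done for these elections, reduces to pairwise majority comparisons. A minimal sketch with hypothetical ballots:

```python
def condorcet_status(ballots, candidates):
    """Find the Condorcet winner (beats every rival in pairwise majority
    comparisons) and the Condorcet loser (beaten by every rival).

    ballots: list of (ranking, count) pairs, where ranking is a list of
    candidate names from most to least preferred. Hypothetical data only."""
    def beats(a, b):
        # voters preferring a to b versus voters preferring b to a
        a_over_b = sum(n for rank, n in ballots if rank.index(a) < rank.index(b))
        b_over_a = sum(n for rank, n in ballots if rank.index(b) < rank.index(a))
        return a_over_b > b_over_a

    winner = next((c for c in candidates
                   if all(beats(c, o) for o in candidates if o != c)), None)
    loser = next((c for c in candidates
                  if all(beats(o, c) for o in candidates if o != c)), None)
    return winner, loser
```

With ballots 4:(A>B>C), 3:(B>C>A), 2:(C>B>A), plurality would elect A on first preferences even though A is the Condorcet loser, which illustrates the instability the article associates with plurality rule.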
|
35 |
Classifying True and Fake Telecommunication Signals With Deep Learning. Myrberger, Axel; Von Essen, Benjamin, January 2020 (has links)
This project aimed to classify artificially generated (fake) and authentic (true) telecommunication signals, based upon their frequency response, using methods from deep learning. Another goal was to accomplish this with the fewest possible dimensions of data. The dataset used contained an equal number of measured frequency responses, provided by Ericsson, and generated ones, from a WINNER II implementation in Matlab. It was determined that a normalized version of the absolute value of the complex frequency response carried enough information for a feedforward network to perform a sufficient classification. To improve the accuracy of the network, we ran a hyperparameter search, which allowed us to reach an accuracy of 90 percent on our test dataset. The results show that it is possible for neural networks to differentiate between true and fake telecommunication signals based on their frequency response, even when it is hard for a human to tell the difference. / Bachelor's thesis in electrical engineering, 2020, KTH, Stockholm
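The preprocessing the report settled on, a normalized version of the magnitude of the complex frequency response, can be sketched as below; min-max scaling is an assumption here, since the report does not spell out the exact normalization used:

```python
import numpy as np

def frequency_features(channel_response):
    """Reduce a complex frequency response to normalized magnitudes,
    discarding phase: the low-dimensional input the report found
    sufficient for a feedforward classifier."""
    mag = np.abs(np.asarray(channel_response))
    # min-max scale to [0, 1] so all features share a comparable range
    return (mag - mag.min()) / (mag.max() - mag.min() + 1e-12)
```

The resulting vectors would then feed a small feedforward network (for example scikit-learn's MLPClassifier) whose hyperparameters are tuned by search, as in the project.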
|
36 |
The Politics of Conspiracy Theory and Control: Cybernetic Governmentality and the Scripted Political. Beckenhauer, Samuel Brian, 13 May 2024
This study analyzes the politics of contemporary conspiracy theory discourses in the United States. Departing from the predominant methodological individualism that characterizes many contemporary analyses of conspiracy theory, which take the individual subject as the unit to be explained and governed, this study situates the production and proliferation of conspiracy theory discourses in the context of cybernetics and related transformations in politics that have tended to reduce democratic representativeness and increase forms of economic and political inequality. Cybernetics, which is often defined as the science of command and control, offers a series of concepts that facilitate an understanding of how freedom and control have become aligned in the second half of the 20th and early 21st centuries in the United States. I utilize Michel Foucault's governmentality approach to formulate a cybernetic governmentality methodology, which analyzes the governance of subjectivity in and through cybernetic systems of communication. Cybernetics, which seeks to invite the individual subject to realize itself through 'choice' and by way of its imbrication into machinic systems, conceptualizes the subject as a consumer and processor of information. I put forth the notion of the scripted political to analyze a key tension within contemporary U.S. politics, as politics is becoming increasingly uncertain yet also often appears to be strongly controlled by political and economic elites. Conspiracy theory, as a speculative genre of thinking, aims to steer events towards certain political ends. Conspiratorial speculation has become a popular means to connect and reflect on a felt obsolescence or superfluity on the part of the individual subject. To substantiate these arguments, I specifically analyze the discourses of QAnon and Covid-19 conspiracy theories. These discourses express political fantasies that often privilege the idea of a liberal autonomous individual subject. 
The politics of contemporary conspiracy theory in the United States thus concerns the fact that these conspiratorial discourses seek to perform a form of liberal subjectivity. However, this performance of individual liberal subjectivity is always caught in cybernetic systems of communication, which seek to produce value, harvest data, and maximize the attention of their 'users', thus undermining the potential for any meaningful form of liberal subjectivity. / Doctor of Philosophy / This study analyzes the politics of contemporary conspiracy theory discourses in the United States. Whereas today many scholars approach conspiracy theory as concerning the beliefs of individual subjects, whose thoughts are considered deviant and potentially requiring reform or monitoring, this study engages with conspiracy theory discourses and their conditions of possibility. While many acknowledge that conspiracy theory is a response to a felt loss of control, this notion of control is understood to be only potentially true or valid. Cybernetics, which is often defined as the science of command and control, offers a series of concepts that facilitate an understanding of how freedom and control have become aligned in the second half of the 20th and early 21st centuries in the United States. Cybernetics, which seeks to invite the individual subject to realize itself through 'choice' and by way of its imbrication into machinic and technological systems, conceptualizes the individual subject as a consumer and processor of information. I develop a new notion that I call the scripted political to study a key tension within contemporary U.S. politics, as politics is becoming increasingly uncertain yet also often appears to be strongly controlled by political and economic elites. Conspiracy theory is a speculative genre of thinking that is well-suited to produce social and political meaning in a condition of information saturation characteristic of today's social domain. 
It does so, among other things, by providing explanations about the operations of what many conspiracy theorists consider to be concentrated forms of power and by attempting to steer events towards certain desirable political ends. However, as a way of producing social and political meaning, conspiracy theory often misses the mark. Yet, despite its frequent factual inconsistencies, conspiratorial discourses and speculations have become popular means to create social connections and to reflect on a sense of obsolescence or superfluity felt by many individual subjects. To support these arguments, I focus on the conspiratorial discourses of and about QAnon and about the Covid-19 pandemic. These discourses express political fantasies that often privilege the idea of a liberal autonomous individual subject. However, I show in this study that fantasies about a re-empowered mode of individual liberal subjectivity are often caught in cybernetic systems of communication, which are more interested in producing economic value, harvesting all sorts of data about individual subjects, and maximizing the attention of their 'users', thus undermining the potential for any return to a meaningful form of liberal subjectivity.
|
37 |
Le citoyen et sa circonscription / The citizen and their constituency. Daoust, Jean-François, 08 1900 (has links)
No description available.
|
38 |
Emprego de redes neurais artificiais supervisionadas e não supervisionadas no estudo de parâmetros reológicos de excipientes farmacêuticos sólidos / Use of supervised and unsupervised artificial neural networks in the study of rheological parameters of solid pharmaceutical excipients. Navarro, Marco Vinícius Monteiro, 05 February 2014
In this paper, artificial neural networks (ANNs) based on supervised and unsupervised
algorithms were investigated for use in the study of rheological parameters of solid
pharmaceutical excipients, in order to develop computational tools for manufacturing solid
dosage forms. Among the four supervised neural networks investigated, the best learning
performance was achieved by a feedforward multilayer perceptron whose architecture
comprised eight neurons in the input layer, sixteen neurons in the hidden layer and one
neuron in the output layer. Learning and predictive performance relative to the angle of repose was
poor, while the Carr index and Hausner ratio (CI and HR, respectively) showed very good
fitting and learning capacity; therefore HR and CI were considered suitable descriptors for the
next stage of development of supervised ANNs. Clustering capacity was evaluated for five
unsupervised strategies. Networks based on purely competitive strategies, the classic
"Winner-Take-All", "Frequency-Sensitive Competitive Learning" and "Rival-Penalized
Competitive Learning" (WTA, FSCL and RPCL, respectively), were able to perform
clustering on the database, but the classification was very poor, showing severe
errors by grouping data with conflicting properties into the same cluster or even
the same neuron; moreover, the criterion the networks adopted for these clusterings
could not be established. Self-Organizing Map (SOM) and Neural Gas (NG) networks
showed better clustering capacity. Both recognized the two major groupings of data,
corresponding to lactose (LAC) and cellulose (CEL). However, SOM made some errors
in classifying data from the minority excipients magnesium stearate (EMG), talc (TLC)
and attapulgite (ATP). The NG network, in turn, performed a very consistent
classification of the data and resolved the misclassifications of SOM, being the most
appropriate network for classifying the data of this study. The use of the NG network
in pharmaceutical technology was still unpublished. NG therefore has great potential
for use in the development of software for automated classification of pharmaceutical
powders and as a new tool for mining and clustering data in drug development.
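The Neural Gas network that performed best here uses a competitive-cooperative update in which every prototype moves toward each sample by an amount that decays with the prototype's distance rank (Martinetz and Schulten's rule). A minimal sketch, with illustrative parameter values rather than those of the thesis:

```python
import numpy as np

def neural_gas(X, n_units=2, n_iter=200, eps=(0.5, 0.05), lam=(2.0, 0.1), seed=0):
    """Minimal Neural Gas sketch: all prototype units adapt toward each
    sample, with a step size that decays exponentially with the unit's
    distance rank (the competitive-cooperative strategy of the thesis)."""
    rng = np.random.default_rng(seed)
    # initialize prototypes from randomly chosen data points
    W = X[rng.choice(len(X), n_units, replace=False)].astype(float)
    for t in range(n_iter):
        frac = t / max(n_iter - 1, 1)
        eps_t = eps[0] * (eps[1] / eps[0]) ** frac   # learning-rate decay
        lam_t = lam[0] * (lam[1] / lam[0]) ** frac   # neighborhood decay
        x = X[rng.integers(len(X))]
        # rank units by distance to the sample (0 = closest)
        ranks = np.argsort(np.argsort(np.linalg.norm(W - x, axis=1)))
        W += (eps_t * np.exp(-ranks / lam_t))[:, None] * (x - W)
    return W
```

Unlike a SOM, the neighborhood is defined by ranking in input space rather than on a fixed output grid, which is often credited for the more consistent clustering the thesis observed.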
|
39 |
Krav vid val av tredjepartslogistiksaktör : En fallstudie genomförd på AA logistik / Requirements when selecting a third-party logistics actor: a case study at AA logistik. Dahir, Chera; Zildzic, Abdela, January 2017
Transportation of goods has existed for a long period of time, and in today's society it has become increasingly common to employ a third-party logistics actor (TPL actor) to manage all or part of a company's distribution. The case company in this study is a newly established TPL actor that has not yet reached as many customers as desired. The case company offers a number of logistics services, such as storage, packing, transport, handling of air freight and safety consulting. The purpose of this study is to identify and describe which requirements matter when selecting a TPL actor. A literature study was conducted, and existing and potential customers of the case company were interviewed. The study's empirical material, consisting of the interviews, was compared against the assembled theory in an analysis in order to draw conclusions. The findings consist of interviews with three existing customers and one potential customer. The interviews show that delivery reliability, price and customer service are the most common and crucial requirements when choosing a TPL actor. Customer A and the potential customer are the only ones who mention proximity as a significant factor; information sharing is another factor the respondents consider significant, and Customer A and the potential customer also see flexibility as a crucial requirement. All existing customers and the potential customer value delivery reliability very highly. Customer C considers the right resources and the right capabilities a requirement placed on the TPL actor, which the potential customer agrees with. 
In conclusion, the main reasons companies outsource parts of their logistics operations are to save money and to focus on their core business. The most common requirements placed on a TPL actor, according to the interviewed customers, are good delivery reliability, a good price, good customer service, correct information sharing and good flexibility. The right resources and the right capabilities, such as high reliability, are factors specific to the selection of a TPL actor within air freight logistics.
All existing customers and the potential customer values delivery security highly. Customer C considers that right resources and right attributes is a requirement placed on the TPL-actor which the potential customer agrees upon. As a conclusion, the main reason to why companies outsource the parts of their logistics management is to save money and focus on the core competence of the business. The most occurring requirements placed on TPL-actors according to the interviewed customers are, good delivery security, good price, good customer service, correct information sharing and good flexibility. Right resources, right attributes as well as high reliability are factors that are specific regarding the selection of TPL-actors, within air goods logistics.
|
40 |
Copyright and culture : a qualitative theoryFraser, Henry January 2018 (has links)
Copyright is conventionally justified as an incentive to produce and disseminate works of authorship. We can justify and theorise copyright more richly, not least because empirical evidence does not support the incentive narrative. Rather than focussing on quantitative matters such as the number of works incentivised and produced, we should consider copyright's qualitative influence on culture. A threshold objection to such an approach is the risk of cultural paternalism. This objection can be overcome. Rather than specifying paternalistic standards of merit for works, we can target the conditions under which their creation and consumption take place. I argue, firstly, that we should adopt the following high-level principles: (i) that the conditions of creation and consumption of works should be conducive to democratic deliberation (democracy) and (ii) that they should facilitate the development of human capabilities (autonomy). Secondly, I propose that we pursue three mid-level objectives, which are helpful indicia of democracy and autonomy: - a fair and wide distribution of communicative and cultural power (inclusiveness); - diversity in the content and perspectives available to the public (diversity); and - conditions that permit authors and users of works to engage rigorously with the conventions of the media in which they operate (rigour). It is often said that copyright obstructs important qualitative objectives, like freedom of expression, and that we could better pursue these goals by weakening copyright and relying on non-proprietary alternatives. My approach produces a more optimistic, but also more complicated, view of copyright. While copyright's qualitative influence is not optimal, reductions in the strength and scope of copyright sometimes produce conditions and incentive structures that are worse for inclusiveness, diversity and rigour than stronger copyright.
For example, both attention and wealth are highly concentrated in networked information economies driven by free sharing of content, and this is bad for diversity and inclusiveness. Online business models, based on surveillance of users' consumption of free works, are corrosive of autonomy and democracy. Merely removing copyright-based restrictions on the sharing of works is not a panacea for copyright's ills. A qualitative theory such as mine equips us to better understand and more richly calibrate the trade-offs involved in copyright policy decisions, and encourages us to treat copyright as part of a broader, qualitatively-oriented information and cultural policy.
|