281 |
Statistical Critical Path Identification and Classification
Panagiotakopoulos, Georgios, 01 May 2011 (has links)
This thesis targets the problem of critical path identification in sub-micron devices. Delays are described using probability density functions (PDFs) in order to model the probabilistic nature of the problem. Thus, a deterministic critical path answer is not possible; instead, the probability that each path is critical is reported. An extensive literature review has been conducted and is presented in detail. Heuristics for accurate critical path calculations are described, and results are compared to those from Monte Carlo simulations.
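To make the probabilistic formulation concrete, the following is a minimal sketch, not taken from the thesis, of how path criticality probabilities can be estimated by Monte Carlo simulation when gate delays are modelled as independent normal PDFs; the paths, delay parameters, and sample count are all illustrative assumptions.

```python
# Minimal sketch (not the author's code): estimating the probability that each
# path in a small timing graph is critical, with gate delays modelled as
# independent normal PDFs. The paths and delay parameters are made up.
import random

# Hypothetical paths, each a list of gate names.
paths = {
    "P1": ["g1", "g2", "g4"],
    "P2": ["g1", "g3", "g4"],
    "P3": ["g5", "g3", "g4"],
}
# Hypothetical (mean, sigma) delay parameters per gate, in picoseconds.
delay_params = {
    "g1": (10.0, 1.0), "g2": (12.0, 2.0), "g3": (11.0, 1.5),
    "g4": (9.0, 0.5), "g5": (13.0, 2.5),
}

def criticality_probabilities(n_samples=100_000, seed=0):
    """Return, for each path, the fraction of samples in which it is the slowest one."""
    rng = random.Random(seed)
    wins = {name: 0 for name in paths}
    for _ in range(n_samples):
        # Sample one delay per gate, then sum the sampled delays along each path.
        gate_delay = {g: rng.gauss(mu, sigma) for g, (mu, sigma) in delay_params.items()}
        path_delay = {name: sum(gate_delay[g] for g in gates) for name, gates in paths.items()}
        wins[max(path_delay, key=path_delay.get)] += 1
    return {name: count / n_samples for name, count in wins.items()}

if __name__ == "__main__":
    print(criticality_probabilities())
```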
|
282 |
EFFICIENT LEARNING-BASED RECOMMENDATION ALGORITHMS FOR TOP-N TASKS AND TOP-N WORKERS IN LARGE-SCALE CROWDSOURCING SYSTEMS
Safran, Mejdl Sultan, 01 May 2018 (has links)
A pressing need for efficient personalized recommendations has emerged in crowdsourcing systems. On the one hand, workers confront a flood of tasks and often spend too much time finding tasks that match their skills and interests; they therefore want effective recommendations of the most suitable tasks with regard to their skills and preferences. On the other hand, requesters sometimes receive low-quality results because a less qualified worker may start working on a task before a better-skilled worker can get to it; they therefore want reliable recommendations of the best workers for their tasks in terms of qualifications and accountability. The task and worker recommendation problems in crowdsourcing systems have unique characteristics that are not present in traditional recommendation scenarios, i.e., the huge flow of tasks with short lifespans, the importance of workers' capabilities, and the quality of the completed tasks. These features make traditional recommendation approaches (mostly developed for e-commerce markets) no longer satisfactory for task and worker recommendation in crowdsourcing systems. In this research, we present our insight into the essential difference between tasks in crowdsourcing systems and products/items in e-commerce markets, and between buyers' interests in products/items and workers' interests in tasks. This insight inspires us to introduce categories as a key mediation mechanism between workers and tasks. We propose a two-tier data representation scheme (defining a worker-category suitability score and a worker-task attractiveness score) to support personalized task and worker recommendation. We also extend two optimization methods, namely least mean square error (LMS) and Bayesian personalized ranking (BPR), to better fit the characteristics of task/worker recommendation in crowdsourcing systems. We then integrate the proposed representation scheme and the extended optimization methods with two adapted popular learning models, matrix factorization and kNN, resulting in two lines of top-N recommendation algorithms for crowdsourcing systems: (1) Top-N-Tasks (TNT) recommendation algorithms for discovering the top-N most suitable tasks for a given worker, and (2) Top-N-Workers (TNW) recommendation algorithms for identifying the top-N best workers for a task requester. An extensive experimental study validates the effectiveness and efficiency of a broad spectrum of algorithms, accompanied by our analysis and the insights gained.
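As a rough illustration of the category-mediated scoring idea described above (not the thesis implementation), the sketch below ranks the top-N tasks for a worker by combining an assumed worker-category suitability score with an assumed per-task attractiveness score; all names, scores, and the combination rule are hypothetical.

```python
# Minimal sketch (assumptions only): scoring a worker against open tasks through a
# category layer, i.e. score(w, t) = suitability(w, category(t)) * attractiveness(w, t).
import heapq

# Hypothetical worker-category suitability scores, learned offline.
suitability = {("alice", "translation"): 0.9, ("alice", "image-tagging"): 0.3}

# Hypothetical open tasks: (task_id, category, attractiveness for this worker).
open_tasks = [
    ("t1", "translation", 0.8),
    ("t2", "image-tagging", 0.95),
    ("t3", "translation", 0.5),
]

def top_n_tasks(worker, tasks, n=2):
    """Score each task via its category, then keep the N highest-scoring ones."""
    scored = []
    for task_id, category, attractiveness in tasks:
        s = suitability.get((worker, category), 0.0) * attractiveness
        scored.append((s, task_id))
    return heapq.nlargest(n, scored)

print(top_n_tasks("alice", open_tasks))
```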
|
283 |
Implementación de Ore Value Ranking en Minera Escondida / Implementation of Ore Value Ranking at Minera Escondida
Ramírez Castillo, Isadora Paz, January 2017 (has links)
Mining Engineering degree (Ingeniera Civil de Minas) / The decline in ore grades at the various mines, together with rising costs, creates an environment in which mining must adopt innovation and technology to make the best use of its resources and increase the value of the business. This is achieved through mine planning, which defines the extraction sequence and the assignment of the mined material to its destinations. The extraction sequence determines, for each time period, how much material will be extracted and from which part of the mine; material assignment, in turn, defines where the mined material is sent, whether to processing destinations (concentrator plant, leach pads, etc.), to stockpiles for later processing, or to waste dumps. The traditional planning methodology bases both the extraction sequence and the material assignment on grade as the decision criterion; Ore Value Ranking questions this procedure and proposes incorporating more information than grade alone in order to make better-informed decisions.
The objective of this thesis is to produce the standard and methodology for implementing Ore Value Ranking at Minera Escondida Ltda. To this end, two decision (or cutoff) criteria were studied: a grade penalized by the recovery of each destination (CuRec/t), and the marginal profit per tonne (US$/t). Although the latter criterion incorporates more information, for example a mine cost that varies with the cycle time of each block to each destination, it also introduces more uncertainty because it depends on uncontrolled variables such as the copper price and the fuel cost.
Among the results of this study, applying the CuRec/t criterion to the fiscal year 2018 plan, without changing the original extraction sequence and only improving destination assignment, yields a 0.26% increase in total fine copper production and a 0.4% increase in Minera Escondida's economic benefit. Owing to limitations of the planning software currently used by the company, it was not possible to generate a mine plan using the US$/t criterion; once it becomes feasible, it is recommended to weigh the benefit of that criterion against the uncertainty it introduces.
Another important result was the definition of cutoff grades for each material type in Minera Escondida's pits using Ore Value Ranking. This showed that the 0.3% copper cutoff grade currently used by the company is much higher than it should be, meaning that 367 Mt of material classified as waste in the five-year plan are actually ore once more information is taken into account. To determine how much of that additional tonnage can actually be processed, it is recommended to generate plans with these new cutoff grades.
Finally, a real implementation of Ore Value Ranking at Minera Escondida requires proper change management that communicates the potential benefits of the change, builds teams committed to the company's strategy, and secures tangible elements such as appropriate software and staff training.
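The following sketch illustrates, with invented numbers rather than Minera Escondida data, how the two decision criteria discussed in this entry could be computed for a single block and compared across destinations; the recovery, cost, and price figures are assumptions.

```python
# Minimal sketch (illustrative numbers only): comparing the two destination criteria,
# recovered grade per tonne (CuRec/t) and marginal profit per tonne (US$/t), for one block.
def recovered_grade(grade_pct, recovery):
    """CuRec/t criterion: head grade penalized by the destination's recovery."""
    return grade_pct * recovery

def marginal_profit_per_tonne(grade_pct, recovery, cu_price_usd_per_t,
                              process_cost_usd_per_t, extra_haul_cost_usd_per_t):
    """US$/t criterion: revenue of recovered copper minus destination-specific costs."""
    cu_tonnes_per_ore_tonne = grade_pct / 100.0 * recovery
    return (cu_tonnes_per_ore_tonne * cu_price_usd_per_t
            - process_cost_usd_per_t - extra_haul_cost_usd_per_t)

# Hypothetical destinations for a 0.45% Cu block (all figures assumed).
destinations = {
    "concentrator": dict(recovery=0.85, process_cost=9.0, extra_haul=0.5),
    "leach_pad":    dict(recovery=0.55, process_cost=4.0, extra_haul=0.8),
}
grade, price = 0.45, 6000.0  # % Cu and US$ per tonne of copper (assumed)

for name, d in destinations.items():
    print(name,
          round(recovered_grade(grade, d["recovery"]), 3),
          round(marginal_profit_per_tonne(grade, d["recovery"], price,
                                          d["process_cost"], d["extra_haul"]), 2))
```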
|
284 |
PhenoVis : a visual analysis tool to phenological phenomena / PhenoVis : uma ferramenta de análise visual para fenômenos fenológicos
Leite, Roger Almeida, January 2015 (has links)
Phenology studies recurrent periodic phenomena of plants and their relationship to environmental conditions. Monitoring forest ecosystems using digital cameras allows the study of several phenological events, such as leaf expansion or leaf fall. Since phenological phenomena are cyclic, the comparative analysis of successive years can identify interesting variations in annual patterns. However, the number of collected images quickly becomes large when the goal is to compare data from several years. Instead of performing the analysis over the images themselves, experts prefer to use derived statistics (such as average values). We propose PhenoVis, a visual analytics tool that provides insightful ways to analyze phenological data. The main idea behind PhenoVis is the Chronological Percentage Map (CPM), a visual mapping that offers a summary view of one year of phenological data. CPMs are highly customizable, encoding more information about the images using a pre-defined histogram, a mapping function that translates histogram values into colors, and a normalized stacked bar chart to display the results. PhenoVis supports different color encodings, visual pattern analysis over CPMs, and similarity searches that rank vegetation patterns found at various time periods. Results for datasets comprising up to nine consecutive years show that PhenoVis is capable of finding relevant phenological patterns over time.
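As an illustration of the idea behind CPMs (not the PhenoVis code), the sketch below builds the matrix underlying a Chronological Percentage Map: one normalized histogram of a per-pixel greenness index per day, whose rows become the stacked bars; the data layout, greenness index, and bin edges are assumptions.

```python
# Minimal sketch (assumed data layout): the matrix behind a Chronological Percentage Map.
import numpy as np

def chronological_percentage_map(daily_greenness, bin_edges):
    """daily_greenness: list of 1-D arrays (one per day) with per-pixel greenness in [0, 1].
    Returns an array of shape (n_days, n_bins) whose rows sum to 1 (the stacked bars)."""
    rows = []
    for pixels in daily_greenness:
        counts, _ = np.histogram(pixels, bins=bin_edges)
        rows.append(counts / counts.sum())
    return np.vstack(rows)

# Three fake "days" of per-pixel greenness values and a pre-defined 5-bin histogram.
rng = np.random.default_rng(0)
days = [rng.beta(a, 2.0, size=1000) for a in (1.0, 2.0, 4.0)]  # greening up over time
cpm = chronological_percentage_map(days, bin_edges=np.linspace(0.0, 1.0, 6))
print(cpm.round(2))  # each row is one day's percentage breakdown
```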
|
285 |
¿Es Posible Mejorar la Rentabilidad de una Estrategia de Inversión en Monedas Basada en Análisis Técnico Utilizando Análisis Fundamental? / Can the Profitability of a Currency Investment Strategy Based on Technical Analysis Be Improved by Using Fundamental Analysis?
Mulatti Morales, Carlos Eugenio, January 2009 (has links)
This work presents the results of developing a currency investment strategy that combines technical and fundamental analysis criteria, in contrast with the traditional approach in which the investor chooses one school of thought over the other to make decisions.
Technical analysis has several drawbacks that lead some investors to question its use. For example, since it relies only on past information, its indicators are generally weak at predicting long-term or structural changes in the prices of the asset under study, something that the corresponding fundamentals do capture. Likewise, fundamental analysis often loses its ability to predict short-term price movements in economies that are still developing, given their low liquidity and sporadic arbitrage opportunities.
To overcome these drawbacks, an investment strategy based on technical-analysis indicators is proposed: a ranking of currencies is built and investments are made in those expected to yield the most in the short term. The results of this strategy are then contrasted with another in which a subset of the currency universe is first preselected using fundamental indicators, after which the technical ranking is applied to the preselected currencies, producing a strategy that delivers a higher return with a higher return-to-risk ratio.
The universe of currencies comprises seven emerging-market countries, where both fundamental and technical analysis are applicable, over the period from January 2003 to December 2007. Using indicators of both kinds, the exchange rate is estimated with a linear equation fitted ad hoc to each currency.
Finally, the percentage deviation of the estimated value from the observed value is used as the score assigned to each currency, and this criterion is used to rank the currencies from the one expected to yield the most to the one expected to yield the least.
The main result is that applying a fundamental filter before the technical analysis improves profitability relative to using technical analysis alone and, in return-to-risk terms, achieves a higher return per unit of risk assumed.
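The sketch below illustrates, with fabricated figures, the ranking step described above: currencies pass a simple fundamental filter and are then ranked by the percentage deviation between a model-estimated and the observed exchange rate; the sign convention, the filter variable, and all numbers are assumptions, not the thesis data.

```python
# Minimal sketch (fabricated numbers): fundamental pre-filter followed by a technical ranking
# based on the deviation between a model-estimated exchange rate and the observed rate.
# Hypothetical per-currency data: observed rate, estimated rate, and one fundamental indicator.
currencies = {
    "BRL": {"observed": 1.95, "estimated": 2.05, "current_account_pct_gdp": -1.5},
    "CLP": {"observed": 510.0, "estimated": 500.0, "current_account_pct_gdp": 2.0},
    "MXN": {"observed": 11.0, "estimated": 11.6, "current_account_pct_gdp": -0.5},
}

def rank_currencies(data, min_current_account=-1.0):
    """Keep currencies passing the fundamental filter, then rank by deviation score."""
    passed = {c: v for c, v in data.items()
              if v["current_account_pct_gdp"] >= min_current_account}
    # Sign convention assumed here: a positive deviation marks an expected gain.
    scores = {c: (v["estimated"] - v["observed"]) / v["observed"] for c, v in passed.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(rank_currencies(currencies))
```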
|
286 |
A Computational Approach to Relative Image Aesthetics
January 2016 (has links)
abstract: Computational visual aesthetics has recently become an active research area. Existing state-of-the-art methods formulate this as a binary classification task in which a given image is predicted to be beautiful or not. In many applications, such as image retrieval and enhancement, it is more important to rank images by their aesthetic quality than to categorize them into two classes. Furthermore, in such applications all images may belong to the same category, so determining an aesthetic ranking of the images is more appropriate. To this end, this work formulates the novel problem of ranking images with respect to their aesthetic quality. A new dataset of image pairs with relative labels is constructed by carefully selecting images from the popular AVA dataset. Unlike in aesthetics classification, there is no single threshold that determines the ranking order of the images across the entire dataset.
This problem is addressed with a deep neural network trained on image pairs by incorporating principles from relative learning. Results show that such a relative training procedure allows the network to rank images with higher accuracy than a state-of-the-art network trained on the same set of images using binary labels. Further analysis shows that training a model on image pairs learns better aesthetic features than training on the same number of individually binary-labelled images.
Additionally, an attempt is made to enhance the performance of the system by incorporating saliency-related information. Given an image, humans may fixate on particular parts of the image by which they are subconsciously intrigued. The saliency information is therefore used both on its own and in combination with the global and local aesthetic features, in two separate sets of experiments. In both cases, a standard saliency model is chosen and the generated saliency maps are convolved with the images before passing them to the network, giving higher importance to the salient regions than to the rest. The saliency images thus generated are used either independently or along with the global and local features to train the network. Empirical results show that the saliency-related aesthetic features may already be learned by the network as a subset of the global features through automatic feature extraction, making the additional saliency module redundant. / Dissertation/Thesis / Masters Thesis Computer Science 2016
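As a toy illustration of the relative (pairwise) training idea, the following sketch trains a linear scorer, standing in for the deep network, with a margin-based hinge update so that the preferred image of each pair receives the higher aesthetic score; the features and pairs are synthetic.

```python
# Minimal sketch (toy linear scorer, not the thesis network): pairwise "relative" training,
# learning a score so the preferred image of each pair is ranked above the other one.
import numpy as np

rng = np.random.default_rng(0)
dim = 8
w = np.zeros(dim)

# Fake training pairs: (features of the preferred image, features of the other image).
pairs = [(rng.normal(size=dim) + 0.5, rng.normal(size=dim)) for _ in range(200)]

def train(pairs, w, margin=1.0, lr=0.01, epochs=20):
    for _ in range(epochs):
        for better, worse in pairs:
            # Hinge on the score difference: push score(better) above score(worse) + margin.
            if w @ better - w @ worse < margin:
                w = w + lr * (better - worse)
    return w

w = train(pairs, w)
accuracy = np.mean([(w @ b) > (w @ c) for b, c in pairs])
print(f"pairs ranked correctly: {accuracy:.2f}")
```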
|
287 |
Efficient Node Proximity and Node Significance Computations in Graphs
January 2017 (has links)
abstract: Node proximity measures are commonly used to quantify how nearby, or otherwise related, two or more nodes in a graph are. Node significance measures are mainly used to determine how important nodes are in a graph. Measures of node proximity and significance have been highly effective in many predictions and applications. Despite their effectiveness, however, they have several shortcomings. One is a scalability problem caused by their high computation costs on large graphs; another is low accuracy when a node's significance and its degree in the graph are not related. A further problem is reduced effectiveness when the information in a graph is uncertain: for an uncertain graph, computing ranking scores over all possible worlds requires exponential computation cost.
In this thesis, I first introduce Locality-sensitive, Re-use promoting, approximate Personalized PageRank (LR-PPR), an approximate personalized PageRank that computes node rankings from the locality information of the seeds, without processing the entire graph, and reuses precomputed locality information across different locality combinations. To identify locality information, I present Impact Neighborhood Indexing (INI), which finds impact neighborhoods by propagating node fingerprints over the network. To address the accuracy challenge, I introduce the Degree Decoupled PageRank (D2PR) technique, which improves the effectiveness of PageRank-based knowledge discovery by considering the significance of a node's neighbors and the degree of the node. To tackle the uncertainty challenge, I introduce Uncertain Personalized PageRank (UPPR), which approximately computes personalized PageRank values under uncertainty in edge existence, and Interval Personalized PageRank with Integration (IPPR-I) and Interval Personalized PageRank with Mean (IPPR-M), which compute ranking scores when uncertainty on edge weights is given as interval values. / Dissertation/Thesis / Doctoral Dissertation Computer Science 2017
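For reference, the sketch below shows a standard personalized PageRank computed by power iteration with a seed-concentrated teleport vector; it is the textbook baseline that LR-PPR, D2PR, and UPPR build on, not the thesis algorithms themselves, and the example graph is made up.

```python
# Minimal sketch (standard power iteration, not LR-PPR/D2PR/UPPR): personalized PageRank
# with the teleport (restart) distribution concentrated on a set of seed nodes.
import numpy as np

def personalized_pagerank(adj, seeds, alpha=0.85, tol=1e-10, max_iter=200):
    """adj[i, j] = 1 if there is an edge i -> j; seeds: list of seed node indices."""
    n = adj.shape[0]
    out_deg = adj.sum(axis=1, keepdims=True)
    # Column-stochastic transition matrix; dangling nodes redistribute mass to the seeds.
    trans = np.divide(adj, out_deg, out=np.zeros_like(adj, dtype=float), where=out_deg > 0).T
    teleport = np.zeros(n)
    teleport[seeds] = 1.0 / len(seeds)
    r = teleport.copy()
    for _ in range(max_iter):
        dangling = r[out_deg.ravel() == 0].sum()
        new_r = alpha * (trans @ r + dangling * teleport) + (1 - alpha) * teleport
        if np.abs(new_r - r).sum() < tol:
            return new_r
        r = new_r
    return r

# Tiny example graph: 0 -> 1, 1 -> 2, 2 -> 0, 3 -> 0 (made up).
adj = np.array([[0, 1, 0, 0],
                [0, 0, 1, 0],
                [1, 0, 0, 0],
                [1, 0, 0, 0]], dtype=float)
print(personalized_pagerank(adj, seeds=[0]).round(3))
```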
|
288 |
New methods for multi-objective learning / Nouvelles méthodes pour l'apprentissage multi-objectifs
Puthiya Parambath, Shameem Ahamed, 16 December 2016 (has links)
Multi-objective problems arise in many real-world scenarios where one has to find an optimal solution considering the trade-off between different competing objectives. Typical examples of multi-objective problems arise in classification, information retrieval, dictionary learning, online learning, etc. In this thesis, we study and propose algorithms for multi-objective machine learning problems. To motivate our work, we give many interesting examples of multi-objective learning problems that are actively pursued by the research community. The majority of the state-of-the-art algorithms proposed for multi-objective learning fall under what is called the "scalarization method", an efficient approach for solving multi-objective optimization problems. Having motivated our work, we study two multi-objective learning tasks in detail. In the first task, we study the problem of finding the optimal classifier for multivariate performance measures. This problem has been studied very actively, and recent papers have proposed many algorithms in different classification settings. We frame it as finding an optimal trade-off between different classification errors, and propose an algorithm based on cost-sensitive classification. In the second task, we study the problem of diverse ranking in information retrieval tasks, in particular recommender systems. We propose an algorithm for diverse ranking that makes use of domain-specific information and formulates the problem as submodular maximization for coverage maximization in a weighted similarity graph. Finally, we conclude that scalarization-based algorithms work well for multi-objective learning problems, but scalarization need not be the go-to approach: it is very important to consider the domain-specific information and objective functions. We end this thesis by proposing some immediate future work, currently being experimented with, and some short-term future work that we plan to carry out.
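As a generic illustration of the scalarization method mentioned above (not the thesis algorithms), the sketch below turns two competing objectives into a family of single-objective problems via a weighted sum and traces out different trade-off solutions as the weight varies; the objective functions are toy examples.

```python
# Minimal sketch (generic weighted-sum scalarization): each weight yields one
# single-objective problem whose minimizer is a different trade-off solution.
import numpy as np

def f1(x):  # e.g. one classification error we want small
    return (x - 1.0) ** 2

def f2(x):  # a competing error, minimized somewhere else
    return (x + 1.0) ** 2

def scalarized_minimum(weight, grid=np.linspace(-2.0, 2.0, 4001)):
    """Minimize weight * f1 + (1 - weight) * f2 over a 1-D grid (brute force for clarity)."""
    values = weight * f1(grid) + (1.0 - weight) * f2(grid)
    return grid[np.argmin(values)]

# Each weight yields a different compromise between the two objectives.
for w in (0.0, 0.25, 0.5, 0.75, 1.0):
    x = scalarized_minimum(w)
    print(f"weight={w:.2f} -> x*={x:+.2f}, f1={f1(x):.2f}, f2={f2(x):.2f}")
```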
|
289 |
Identificação de autoridades em tópicos na blogosfera brasileira usando comentários como relacionamento / Topical authority identification in the brazilian blogosphere using comments as relationships
Santos, Henrique Dias Pereira dos, January 2013 (has links)
With the growing number of users accessing the Internet in Brazil, the amount of content produced by Brazilians increases, so it becomes important to identify the best authors in order to have more confidence in the texts read. In this sense, this work presents a study on the discovery of topic authorities in the Brazilian blogosphere. The scope of the study is the Blogspot publishing platform, focusing on bloggers who identify themselves as Brazilians. To this end, we collected nine million posts from the year 2012 and used comments as the source of relationships between bloggers to build a social network. This network was used in experiments with the proposed approach for identifying topic authorities. The algorithm is based on Topic PageRank: it separates the blogosphere's topics using the tags that users assign to their posts and then builds the list of authorities for each topic. The experiments conducted show that the proposed approach produces a better ranking than the original PageRank algorithm. We also characterize the collected data with a survey applied to four thousand authors.
|