311. Result size calculation for Facebook's GraphQL query language / Beräkning av resultatstorlek för Facebooks GraphQL query language. Andersson, Tim. January 2018.
GraphQL is a query language for web APIs and a runtime for executing server requests that interact with the data behind the API. Research shows that even for simple GraphQL queries, both the size of the response object and the execution time needed to retrieve it may be prohibitively large, and that current implementations of the language suffer from this issue. This thesis explores the implementation of an algorithm for calculating the exact size of the response object of a GraphQL query, and a performance-based evaluation of that implementation. A proof-of-concept server using the implementation, and subsequent tests of the calculation times for particularly problematic queries sent to the server, show that the implementation scales well and could serve as a way to stop such queries from executing.
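As a point of intuition (not the thesis's actual implementation), the following Python sketch computes the exact size of the response to a GraphQL-like query without materializing the response; the toy data model, the query encoding and the size measure (number of leaf values) are assumptions made for illustration. Result-size algorithms in the literature additionally cache intermediate sizes per (node, subquery) pair so the calculation stays cheap even when the response itself grows exponentially with query depth; the sketch omits that caching for brevity.

    # Toy data graph: each node maps a field to a scalar or to a list of
    # linked node ids. (Invented for illustration.)
    DATA = {
        "u1": {"name": "Alice", "friends": ["u2", "u3"]},
        "u2": {"name": "Bob",   "friends": ["u1", "u3"]},
        "u3": {"name": "Carol", "friends": ["u1", "u2"]},
    }

    def result_size(node_id, query):
        """Exact number of leaf values in the response for `query` rooted
        at `node_id`, computed without building the response object."""
        size = 0
        for field, subquery in query.items():
            if subquery is None:  # scalar field: one value in the response
                size += 1
            else:                 # nested selection over every linked node
                size += sum(result_size(child, subquery)
                            for child in DATA[node_id][field])
        return size

    # A friends-of-friends query doubles in result size with every level of
    # nesting, even on this three-node graph: exactly the blow-up at issue.
    deep = {"friends": {"friends": {"friends": {"name": None}}}}
    print(result_size("u1", deep))  # 8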
312. Applying design patterns and testing it in JavaScript. Skoko, Dennis. January 2018.
No description available.
313. Motor control under strong vibrations. Persson, Tobias; Fredlund, Andreas. January 2018.
No description available.
314. Ambiente data cleaning: suporte extensível, semântico e automático para análise e transformação de dados / Data cleaning environment: extensible, semantic and automatic support for data analysis and transformation. Jardini, Toni [UNESP]. 30 November 2012.
One of the great challenges in obtaining knowledge from data sources is ensuring the consistency and non-duplication of the stored information. Several techniques and algorithms have been proposed to minimize the costly work of allowing data to be analyzed and corrected. However, other aspects remain essential to the success of the data cleaning process, and they involve several technological areas: computational performance, semantics, and process autonomy. Against this backdrop, a data cleaning environment was developed that comprises a collection of tools supporting data analysis and transformation automatically and extensibly, with semantic and learning support independent of language. The objective of this work is to propose an environment whose contributions cover problems still little explored by the scientific data cleaning community, such as semantics and autonomy in the execution of the cleaning, and whose objectives include reducing user interaction in the process of analyzing and correcting inconsistencies and duplicates. Among the contributions of the developed environment, its effectiveness proved significant, covering approximately 90% of the inconsistencies present in the database, with 0% false positives and no need for user interaction.
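As a flavour of the similarity-based duplicate detection such an environment automates, here is a minimal sketch using only the Python standard library; the normalization rules and the 0.9 threshold are illustrative assumptions, not the environment's actual configuration.

    from difflib import SequenceMatcher
    from unicodedata import normalize

    def canon(s):
        """Strip accents and normalize case and spacing before comparison."""
        s = normalize("NFKD", s).encode("ascii", "ignore").decode()
        return " ".join(s.lower().split())

    def similar(a, b, threshold=0.9):
        return SequenceMatcher(None, canon(a), canon(b)).ratio() >= threshold

    records = ["São Paulo", "Sao  Paulo", "Rio de Janeiro"]
    duplicates = [(a, b) for i, a in enumerate(records)
                  for b in records[i + 1:] if similar(a, b)]
    print(duplicates)  # [('São Paulo', 'Sao  Paulo')]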
315. Severity sensitive norm analysis and decision making. Gasparini, Luca. January 2017.
Normative systems have been proposed as a useful abstraction to represent ideals of behaviour for autonomous agents in a social context. They specify constraints that agents ought to follow, but may sometimes be violated. Norms can increase the predictability of a system and make undesired situations less likely. When designing normative systems, it is important to anticipate the effects of possible violations and understand how robust these systems are to violations. Previous research on robustness analysis of normative systems builds upon simplistic norm formalisms, lacking support for the specification of complex norms that are often found in real world scenarios. Furthermore, existing approaches do not consider the fact that compliance with different norms may be more or less important in preserving some desirable properties of a system; that is, norm violations may vary in severity. In this thesis we propose models and algorithms to represent and reason about complex norms, where their violation may vary in severity. We build upon existing preference-based deontic logics and propose mechanisms to rank the possible states of a system according to what norms they violate, and their severity. Further, we propose mechanisms to analyse the properties of the system under different compliance assumptions, taking into account the severity of norm violations. Our norm formalism supports the specification of norms that regulate temporally extended behaviour and those that regulate situations where other norms have been violated. We then focus on algorithms that allow coalitions of agents to coordinate their actions in order to minimise the risk of severe violations. We propose offline algorithms and heuristics for pre-mission planning in stochastic scenarios where there is uncertainty about the current state of the system. We then develop online algorithms that allow agents to maintain a certain degree of coordination and to use communication to improve their performance.
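The severity-sensitive ranking can be illustrated with a small sketch; the norm names, severity values and the lexicographic comparison below are invented for illustration, whereas the thesis itself builds on preference-based deontic logics rather than this toy encoding.

    # Map each norm to an assumed severity (higher = worse to violate).
    NORM_SEVERITY = {"no_fly_zone": 3, "report_position": 2, "speed_limit": 1}

    def violation_profile(violated):
        """A state's violations, worst first, so that states compare
        lexicographically: fewer and less severe violations rank better."""
        return sorted((NORM_SEVERITY[n] for n in violated), reverse=True)

    states = {
        "s1": {"speed_limit"},
        "s2": {"no_fly_zone"},
        "s3": {"report_position", "speed_limit"},
        "s4": set(),
    }

    ranking = sorted(states, key=lambda s: violation_profile(states[s]))
    print(ranking)  # ['s4', 's1', 's3', 's2'] under this toy severity order

Under this ordering a single severe violation (s2) ranks worse than two milder ones (s3); that is one possible design choice among the severity orderings such a formalism can express.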
316. Detecção e Análise de Contornos em Imagens 2D / Detection and analysis of contours on 2D images. Bianchi, Andrea Gomes Campos. 26 October 1998.
In this work we present the development and implementation of several image segmentation techniques in terms of edge detection, with special emphasis on non-linear segmentation. The methods considered were: the gradient, the Laplacian of Gaussian, linear regularization, and non-linear segmentation using the Graduated Non-Convexity algorithm, based on the minimization of an energy functional associated with the image. The mathematical treatment of the functional was carried out according to the variational calculus paradigm. Its main advantage becomes evident in the treatment of edges and discontinuities: because the segmentation acts non-uniformly on the image, only the more homogeneous regions are smoothed, preserving discontinuities and enabling a more precise localization of the contours. In the chapters that introduce the computational techniques, we present some examples of the segmentations obtained, enabling a comparative and qualitative evaluation of the results. Applications to micrographs of KBr crystals and of minerals served as a test bed for validating segmentation by means of the Graduated Non-Convexity algorithm.
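As a concrete reference point for the linear methods compared in the thesis, the following sketch implements a Laplacian-of-Gaussian edge detector; the sigma value, the zero-crossing test and the synthetic test image are illustrative choices, not the thesis's parameters.

    import numpy as np
    from scipy import ndimage

    def log_edges(image, sigma=2.0):
        """Mark edges at zero crossings of the Laplacian-of-Gaussian response."""
        response = ndimage.gaussian_laplace(image.astype(float), sigma=sigma)
        edges = np.zeros(response.shape, dtype=bool)
        # A pixel is an edge where the response changes sign against a
        # vertical or horizontal neighbour.
        edges[:-1, :] |= (response[:-1, :] * response[1:, :]) < 0
        edges[:, :-1] |= (response[:, :-1] * response[:, 1:]) < 0
        return edges

    # Synthetic test image: a bright square on a dark background.
    img = np.zeros((64, 64))
    img[16:48, 16:48] = 1.0
    print(log_edges(img).sum())  # non-zero: the crossings trace the square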
317. A security model for a virtualized information environment. Tolnai, Annette. 15 August 2012.
D.Phil. / Virtualization is an infrastructure platform whose adoption is sweeping through IT. Higher hardware utilization, better responsiveness to changing business conditions, and lower-cost operations are must-haves in the new generation of virtualization solutions. Virtualization is not just one more entry in the long line of “revolutionary” products to hit the technology marketplace: many parts of the technology ecosystem will be affected as the paradigm shifts from the old one-to-one correspondence between software and hardware to the new approach of software operating on whatever hardware happens to be most suitable at the time. This shift brings security concerns with it, and these need to be addressed. Security in and around virtualized systems will become more pertinent the more virtualization is employed in everyday IT. In this thesis, a security model for virtualization is developed and presented. The model covers the different facets needed to address virtualization security.
318. Performance Evaluation of Cassandra in a Virtualized Environment. Vellanki, Mohit. January 2017.
Context. Apache Cassandra is an open-source, scalable, NoSQL database that distributes data over many commodity servers. It avoids a single point of failure by copying and storing the data in different locations, and it uses a ring design rather than the traditional master-slave design. Virtualization is the technique by which the physical resources of a machine are divided and utilized by several virtual machines; it is the fundamental technology that allows cloud computing to provide resource sharing among users. Objectives. Through this research, the effects of virtualization on Cassandra are observed by comparing a virtual machine arrangement to a physical machine arrangement, along with the overhead caused by virtualization. Methods. An experiment is conducted to identify the aforementioned effects of virtualization on Cassandra compared to physical machines. Cassandra runs on physical machines with Ubuntu 14.04 LTS arranged in a multi-node cluster. Results are obtained by executing mixed, read-only and write-only operations with the Cassandra stress tool on the data populated in this cluster; this procedure is repeated for 100% and 66% workloads. The same procedure is repeated in a virtual machine cluster and the results are compared. Results. Virtualization overhead has been identified in terms of CPU utilization, and the effects of virtualization on Cassandra are reported in terms of disk utilization, throughput and latency. Conclusions. The overhead caused by virtualization is observed and its consequences for the performance of Cassandra are identified.
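The load in the thesis is driven by the Cassandra stress tool; purely to illustrate the read/write timing idea, here is a hedged Python sketch using the DataStax cassandra-driver package (a reachable local node is assumed, and the keyspace and table are invented).

    import time
    from cassandra.cluster import Cluster

    cluster = Cluster(["127.0.0.1"])
    session = cluster.connect()
    session.execute("CREATE KEYSPACE IF NOT EXISTS bench WITH replication = "
                    "{'class': 'SimpleStrategy', 'replication_factor': 1}")
    session.execute("CREATE TABLE IF NOT EXISTS bench.kv "
                    "(k int PRIMARY KEY, v text)")

    insert = session.prepare("INSERT INTO bench.kv (k, v) VALUES (?, ?)")
    select = session.prepare("SELECT v FROM bench.kv WHERE k = ?")

    n = 1000
    start = time.perf_counter()
    for i in range(n):
        session.execute(insert, (i, "payload"))  # write-only phase
    write_s = time.perf_counter() - start

    start = time.perf_counter()
    for i in range(n):
        session.execute(select, (i,))            # read-only phase
    read_s = time.perf_counter() - start

    print(f"writes: {n / write_s:.0f} ops/s, reads: {n / read_s:.0f} ops/s")
    cluster.shutdown()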
319. A sliding window BIRCH algorithm with performance evaluations. Li, Chuhe. January 2017.
An increasing number of applications across various fields generate transactional or other time-stamped data, all of which belongs to time series data. Time series data mining is a popular topic in the data mining field, and it introduces challenges for improving the accuracy and efficiency of algorithms. Time series data are dynamic, large-scale and highly complex, which makes it difficult to discover patterns among them with methods suited to static data. BIRCH, a hierarchical clustering method, was proposed and employed to address the problems of large datasets; it minimizes I/O and time costs, builds a CF tree during its working process, and generates clusters after the four phases of the whole BIRCH procedure. A drawback of BIRCH is that it is not very scalable. This thesis is devoted to improving the accuracy and efficiency of the BIRCH algorithm: a sliding window BIRCH algorithm is implemented on the basis of BIRCH. At the end of the thesis, the accuracy and efficiency of sliding window BIRCH are evaluated, and a performance comparison among SW BIRCH, BIRCH and K-means is presented using the Silhouette Coefficient and the Calinski-Harabasz index. The preliminary results indicate that SW BIRCH may achieve better performance than BIRCH in some cases.
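A minimal sketch of the sliding-window idea, evaluated with the same two indices the thesis reports; the window and step sizes, the BIRCH threshold and the toy stream are illustrative assumptions rather than the thesis's configuration.

    import numpy as np
    from sklearn.cluster import Birch
    from sklearn.metrics import silhouette_score, calinski_harabasz_score

    rng = np.random.default_rng(0)
    # Toy time-stamped stream: a mixture of two 2-D clusters.
    stream = np.concatenate([
        rng.normal(loc=(0.0, 0.0), scale=0.3, size=(500, 2)),
        rng.normal(loc=(3.0, 3.0), scale=0.3, size=(500, 2)),
    ])
    rng.shuffle(stream)

    window, step = 200, 100
    for start in range(0, len(stream) - window + 1, step):
        X = stream[start:start + window]
        labels = Birch(threshold=0.5, n_clusters=2).fit_predict(X)
        print(f"window @ {start}: "
              f"silhouette={silhouette_score(X, labels):.2f}, "
              f"calinski-harabasz={calinski_harabasz_score(X, labels):.1f}")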
320. User-interface evaluation metrics for a typical M-Learning application. Kantore, Adelin. January 2011.
Usability is seen as an important aspect of the quality of an M-learning application. Yet very little research has been conducted in this area, particularly in South Africa. Even though trials of M-learning projects have been witnessed in the country during the last five years, very little is known about the usability of the systems that were implemented; additionally, the metrics and measures used in evaluating usability have not been reported. A need therefore exists for metrics relevant to M-learning usability. The primary objective of this work was to propose metrics and measures for evaluating the user-interface design usability of an M-learning application. The research included a literature review of M-learning, as well as the development of metrics and measures based on the Goal Question Metric (GQM) model, which provided a reference model and measurements for evaluating user-interface usability. A case study was used as the research strategy: an application called Kontax was selected for evaluation by users and expert reviewers. Data-collection methods consisted of user testing and heuristic evaluations, and the data-gathering instruments developed included surveys, user-satisfaction questionnaires based on the proposed metrics, task scenarios, and expert-review questionnaires based on the proposed metrics. It was found that, although the users thought the system was very interesting and wished to hear more about it in the future, it nevertheless had usability flaws which made it difficult to use. All the users failed to register so that they could use the system; additionally, the system's error messages did not help users recognize and recover from an error, leaving the user to simply log out. Help was not adequate, making it difficult for first-time users to know what to do when they needed support. The system was also said to present a lot of information on its home page, which caused users to become disoriented. The Kontax application has usability flaws which should be attended to in order to improve its usability. The proposed metrics proved to be very useful in evaluating the usability of the tool.
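To make the GQM idea concrete, here is a hypothetical sketch of one GQM-derived measure; the goal, question, task names and counts are invented for illustration (the 0% registration row merely echoes the failure the study reports) and are not the thesis's instruments.

    # Goal: assess how usable the application's core flows are.
    # Question: can users complete the core tasks unaided?
    # Metric: task completion rate per task.
    tasks = {
        "register":    {"attempted": 8, "completed": 0},
        "read_lesson": {"attempted": 8, "completed": 6},
        "find_help":   {"attempted": 5, "completed": 1},
    }

    for name, t in tasks.items():
        rate = t["completed"] / t["attempted"]
        print(f"{name}: {rate:.0%} completion")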