21 |
Constitutive Behaviour Of Partly Saturated Fine Grained Soils. Herkal, R N, 07 1900 (has links) (PDF)
No description available.
|
22 |
Deep Learning Approaches on the Recognition of Affective Properties of Images / 深層学習を用いた画像の情動的属性の認識. Yamamoto, Takahisa, 23 September 2020 (has links)
Kyoto University / 0048 / New-system course doctorate / Doctor of Informatics / Kou No. 22800 / Johaku No. 730 / 新制||情||125 (University Library) / Kyoto University, Graduate School of Informatics, Department of Intelligence Science and Technology / (Examination committee) Associate Professor Atsushi Nakazawa, Professor Ko Nishino, Professor Hisashi Kashima / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Informatics / Kyoto University / DGAM
|
23 |
A Chemical/Powder Metallurgical Route to Fine-Grained Refractory Alloys. Sona N Avetian (6984974), 07 August 2021 (has links)
Ni-based superalloys remain state-of-the-art materials for use in the high-temperature, corrosive environments experienced by turbine blades in gas turbine engines used for propulsion and energy generation. Increasing the operating temperatures of turbine engines can yield increased engine efficiencies. However, appreciably higher operational temperatures can exceed the capabilities of Ni-based superalloys. Consequently, interest exists in developing high-melting refractory complex concentrated alloys (RCCAs) with the potential to surpass the high-temperature property limitations of Ni-based alloys. RCCAs are multi-principal element alloys, often comprising five or more elements in equal or near-equal amounts. Conventional solidification-based processing methods (e.g., arc melting) of RCCAs tend to yield coarse-grained samples with a large degree of microsegregation, often requiring long subsequent homogenization annealing times. Additionally, the large differences in melting temperatures of the component elements can further complicate solidification-based fabrication of RCCAs.

Herein, the feasibility of a new chemical synthesis, powder metallurgy route for generating fine-grained, homogeneous RCCAs is demonstrated. This is achieved by first employing the Pechini method, a well-developed process for generating fine-grained oxide powder mixtures. The fine oxide powder mixture is then reduced at a low temperature (600-770 °C) to yield fine-grained metal alloy powder. Hot pressing of the metallic powder is then used to achieve dense, fine-grained metallic alloys. While this process is demonstrated for generating fine-grained, high-melting MoW and MoWCr alloys, the method can be readily extended to other fine-grained RCCA compositions, including those unachievable by solidification-based processing methods.
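For concreteness, the low-temperature reduction step has the familiar form sketched below; hydrogen as the reducing agent is an assumption here, as the abstract does not name the reducing gas:

```latex
% Illustrative hydrogen reduction of the mixed-oxide powder
% (reducing agent assumed; not specified in the abstract):
\mathrm{MoO_3 + 3\,H_2 \rightarrow Mo + 3\,H_2O}
\qquad
\mathrm{WO_3 + 3\,H_2 \rightarrow W + 3\,H_2O}
```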
|
24 |
Vliv velikosti vneseného tepla na vybrané vlastnosti svaru jemnozrnné oceli / Influence of the amount of heat input on selected properties of fine-grained steel welds. Urban, Vratislav, January 2011 (has links)
The diploma thesis focuses on the characterization of fine-grained steels, the choice of welding technology, and the effect of heat transmission on weld joint quality. The effect of transmitted heat on the microscopic structure and stiffness of the weld has been studied. Based on the obtained results, the optimal parameters for welding fine-grained steel have been determined.
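For reference, arc-welding heat input is conventionally computed as follows (a standard definition, not a formula quoted from the thesis itself):

```latex
% Heat input Q per unit weld length, with thermal efficiency \eta,
% arc voltage U (V), current I (A), and travel speed v (mm/s):
Q = \eta \, \frac{U \cdot I}{1000 \, v} \quad [\mathrm{kJ/mm}]
```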
|
25 |
Fine-grained sentiment analysis of product reviews in Swedish. Westin, Emil, January 2020 (has links)
In this study we gather customer reviews from Prisjakt, a Swedish price comparison site, with the goal of studying the relationship between review and rating, a task known as sentiment analysis. The purpose of the study is to evaluate three different supervised machine learning models on a fine-grained dependent variable representing the review rating. For classification, binary and multinomial models are used with the one-versus-one strategy implemented in the Support Vector Machine with a linear kernel, evaluated with F1, accuracy, precision, and recall scores. We use Support Vector Regression by approximating the fine-grained variable as continuous, evaluated using MSE. Furthermore, the three models are evaluated on a balanced and an unbalanced dataset in order to investigate the effects of class imbalance. The results show that the SVR performs better on unbalanced fine-grained data, with the best fine-grained model reaching an MSE of 4.12, compared to the balanced SVR (6.84). The binary SVM model reaches an accuracy of 86.37% and a weighted macro F1 of 86.36% on the unbalanced data, while the balanced binary SVM model reaches approximately 80% on both measures. The multinomial model shows the worst performance due to its inability to handle class imbalance, despite the implementation of class weights. Furthermore, results from feature engineering show that SVR benefits marginally from certain regex conversions, and tf-idf weighting shows better performance on the balanced sets compared to the unbalanced sets.
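A minimal sketch of this modeling setup, assuming scikit-learn and placeholder reviews rather than the actual Prisjakt corpus:

```python
# Sketch of the classification/regression setup described above.
# The reviews and ratings below are placeholders, not Prisjakt data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import f1_score, mean_squared_error
from sklearn.svm import SVC, SVR

reviews = ["Mycket bra produkt", "Helt värdelös", "Okej för priset"]
ratings = [5, 1, 3]  # fine-grained 1-5 star ratings

X = TfidfVectorizer().fit_transform(reviews)

# Multinomial classification: SVC with a linear kernel decomposes the
# multiclass problem via the one-versus-one strategy; class_weight
# counteracts class imbalance.
clf = SVC(kernel="linear", class_weight="balanced").fit(X, ratings)
print(f1_score(ratings, clf.predict(X), average="weighted"))

# Regression: treat the fine-grained rating as continuous, score with MSE.
reg = SVR(kernel="linear").fit(X, ratings)
print(mean_squared_error(ratings, reg.predict(X)))
```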
|
26 |
A User-Centric Security Policy Enforcement Framework for Hybrid Mobile Applications. Sunkaralakunta Venkatarama Reddy, Rakesh, 26 September 2019 (has links)
No description available.
|
27 |
Attribute-based Approaches for Secure Data Sharing in Industry. Chiquito, Alex, January 2022 (has links)
The Industry 4.0 revolution relies heavily on data to generate value, innovation, new services, and optimization of current processes [1]. Technologies such as the Internet of Things (IoT), machine learning, digital twins, and more depend directly on data to bring value and innovation to both discrete manufacturing and process industries. The origin of data may vary from sensor data to financial statements and even strictly confidential user or business data. In data-driven ecosystems, collaboration between different actors is often needed to provide services such as analytics, logistics, predictive maintenance, process improvement, and more. Data therefore cannot be considered a corporate internal asset only. Hence, data needs to be shared among organizations in a data-driven ecosystem for it to be used as a strategic resource for creating desired values, innovations, or process improvements [2]. When sharing business-critical and sensitive data, access to the data needs to be accurately controlled to prevent leakage to unauthorized users and organizations. Access control is a mechanism to control the actions of users over objects, e.g., reading, writing, and deleting files, accessing data, writing to registers, and so on. This thesis studies one of the latest access control mechanisms, Attribute-Based Access Control (ABAC), for industrial data sharing. ABAC emerges as an evolution of the industry-wide used Role-Based Access Control. ABAC builds access policies from attributes, rather than manually assigned roles or ownerships, enabling expressive, fine-grained access control policies. Furthermore, this thesis presents approaches to implement ABAC in industrial IoT data sharing applications, with special focus on the manageability and granularity of the attributes and policies. The thesis also studies the implications of outsourced data storage on third-party cloud servers for data-sharing access control and explores how to integrate cryptographic techniques and paradigms into data access control. In particular, the combination of ABAC and Attribute-Based Encryption (ABE) is investigated to protect privacy over not-fully-trusted domains. In doing so, important research gaps are identified. / Arrowhead Tools
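A minimal sketch of attribute-based policy evaluation, with hypothetical attribute names and policy not drawn from the thesis:

```python
# Illustrative ABAC evaluation: the decision is a predicate over subject,
# resource, and action attributes rather than a manually assigned role.
# All attribute names and the policy itself are hypothetical.
from dataclasses import dataclass

@dataclass
class Request:
    subject: dict
    resource: dict
    action: str

def maintenance_policy(req: Request) -> bool:
    """Permit reads of non-confidential data shared with the subject's org."""
    return (
        req.action == "read"
        and req.subject.get("organization") in req.resource.get("shared_with", [])
        and req.subject.get("role_domain") == "predictive_maintenance"
        and req.resource.get("classification") != "strictly_confidential"
    )

req = Request(
    subject={"organization": "PartnerCo", "role_domain": "predictive_maintenance"},
    resource={"shared_with": ["PartnerCo"], "classification": "internal"},
    action="read",
)
print(maintenance_policy(req))  # True: every attribute condition holds
```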
|
28 |
An Approach for Fine-Grained Profiling of Parallel Applications. Deshmukh, Amol S., 07 October 2004 (has links)
No description available.
|
29 |
M3D: Multimodal MultiDocument Fine-Grained Inconsistency Detection. Tang, Chia-Wei, 10 June 2024 (has links)
Validating claims against misinformation is a highly challenging task that involves understanding how each factual assertion within the claim relates to a set of trusted source materials. Existing approaches often make coarse-grained predictions but fail to identify the specific aspects of the claim that are troublesome and the specific evidence relied upon. In this paper, we introduce a method and a new benchmark for this challenging task. Our method predicts the fine-grained logical relationship of each aspect of the claim from a set of multimodal documents, which include text, images, video, and audio. We also introduce a new benchmark (M^3DC) of claims requiring multimodal multidocument reasoning, which we construct using a novel claim synthesis technique. Experiments show that our approach significantly outperforms state-of-the-art baselines on two benchmarks while providing finer-grained predictions, explanations, and evidence. / Master of Science / In today's world, we are constantly bombarded with information from various sources, making it difficult to distinguish between what is true and what is false. Validating claims and determining their truthfulness is an essential task that helps us separate facts from fiction, but it can be a time-consuming and challenging process. Current methods often fail to pinpoint the specific parts of a claim that are problematic and the evidence used to support or refute them.
In this study, we present a new method and benchmark for fact-checking claims using multiple types of information sources, including text, images, videos, and audio. Our approach analyzes each aspect of a claim and predicts how it logically relates to the available evidence from these diverse sources. This allows us to provide more detailed and accurate assessments of the claim's validity. We also introduce a new benchmark dataset called M^3DC, which consists of claims that require reasoning across multiple sources and types of information. To create this dataset, we developed a novel technique for synthesizing claims that mimic real-world scenarios. Our experiments show that our method significantly outperforms existing state-of-the-art approaches on two benchmarks while providing more fine-grained predictions, explanations, and evidence. This research contributes to the ongoing effort to combat misinformation and fake news by providing a more comprehensive and effective approach to fact-checking claims.
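One plausible shape of such per-aspect labeling, sketched with a hypothetical label set and scoring function rather than the paper's actual model:

```python
# Illustrative only: assign each claim aspect the label of its best-scoring
# evidence item. The label set, aspect split, and scorer are hypothetical
# stand-ins, not the M3D architecture.
from typing import Callable

def label_claim(aspects: list[str],
                evidence: list[str],
                score: Callable[[str, str], dict[str, float]]) -> dict[str, str]:
    verdicts = {}
    for aspect in aspects:
        best_label, best_score = "NOT_ENOUGH_INFO", float("-inf")
        for doc in evidence:
            scores = score(aspect, doc)  # e.g., an entailment model's scores
            label = max(scores, key=scores.get)
            if scores[label] > best_score:
                best_label, best_score = label, scores[label]
        verdicts[aspect] = best_label
    return verdicts

# Toy scorer standing in for a multimodal entailment model.
def toy_score(aspect: str, doc: str) -> dict[str, float]:
    if aspect in doc:
        return {"SUPPORTED": 0.9, "REFUTED": 0.05, "NOT_ENOUGH_INFO": 0.05}
    return {"SUPPORTED": 0.1, "REFUTED": 0.2, "NOT_ENOUGH_INFO": 0.7}

print(label_claim(["mayor", "Tuesday"], ["the mayor spoke Tuesday"], toy_score))
```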
|
30 |
Generalizing the Utility of Graphics Processing Units in Large-Scale Heterogeneous Computing Systems. Xiao, Shucai, 03 July 2013 (has links)
Today, heterogeneous computing systems are widely used to meet the increasing demand for high-performance computing. These systems commonly use powerful and energy-efficient accelerators to augment general-purpose processors (i.e., CPUs). The graphics processing unit (GPU) is one such accelerator. Originally designed solely for graphics processing, GPUs have evolved into programmable processors that can deliver massive parallel processing power for general-purpose applications.
Using SIMD (Single Instruction, Multiple Data) components as building units, the current GPU architecture is well suited for data-parallel applications where the execution of each task is independent. With the delivery of programming models such as the Compute Unified Device Architecture (CUDA) and the Open Computing Language (OpenCL), programming GPUs has become much easier than before. However, developing and optimizing an application on a GPU is still a challenging task, even for well-trained computing experts. Such programming tasks will be even more challenging in large-scale heterogeneous systems, particularly in the context of utility computing, where GPU resources are used as a service. These challenges are largely due to limitations in the current programming models: (1) no intra- and inter-GPU cooperative mechanisms are natively supported; (2) current programming models only support the utilization of GPUs installed locally; and (3) to use GPUs on another node, application programs need to explicitly call application programming interface (API) functions for data communication.
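As a concrete illustration of limitation (2), standard OpenCL device discovery only ever sees local hardware; the sketch below uses the pyopencl bindings for brevity (an assumption made here; the dissertation itself targets the C API):

```python
# Standard OpenCL device discovery enumerates only locally installed GPUs.
# Making remote GPUs appear in this list, transparently, is exactly what
# VOCL's virtualization adds on top of this model.
import pyopencl as cl

for platform in cl.get_platforms():
    for device in platform.get_devices(device_type=cl.device_type.GPU):
        print(platform.name, device.name)  # local GPUs only
```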
To reduce the mapping efforts and to better utilize the GPU resources, we investigate generalizing the utility of GPUs in large-scale heterogeneous systems with GPUs as accelerators. We generalize the utility of GPUs through the transparent virtualization of GPUs, which can enable applications to view all GPUs in the system as if they were installed locally. As a result, all GPUs in the system can be used as local GPUs. Moreover, GPU virtualization is a key capability to support the notion of "GPU as a service." Specifically, we propose the virtual OpenCL (or VOCL) framework for the transparent virtualization of GPUs. To achieve good performance, we optimize and extend the framework in three aspects: (1) optimize VOCL by reducing the data transfer overhead between the local node and remote node; (2) propose GPU synchronization to reduce the overhead of switching back and forth if multiple kernel launches are needed for data communication across different compute units on a GPU; and (3) extend VOCL to support live virtual GPU migration for quick system maintenance and load rebalancing across GPUs.
With the above optimizations and extensions, we thoroughly evaluate VOCL along three dimensions: (1) show the performance improvement for each of our optimization strategies; (2) evaluate the overhead of using remote GPUs via several microbenchmark suites as well as a few real-world applications; and (3) demonstrate the overhead as well as the benefit of live virtual GPU migration. Our experimental results indicate that VOCL can generalize the utility of GPUs in large-scale systems at a reasonable virtualization and migration cost. / Ph. D.
|