231
Artificial Intelligence-based Public Healthcare Systems: G2G Knowledge-based Exchange to Enhance the Decision-making Process
Nasseef, O.A., Baabdullah, A.M., Alalwan, A.A., Lal, Banita, Dwivedi, Y.K. 07 September 2021
With the rapid evolution of data over the last few years, many new technologies have arisen, with artificial intelligence (AI) technologies at the top. AI holds considerable potential to transform patient healthcare. Given the gaps revealed in healthcare systems by the 2020 COVID-19 pandemic, this research investigates the effects of using an AI-driven public healthcare framework to enhance the decision-making process in healthcare organizations in Saudi Arabia, using an extended version of Shaft and Vessey's (2006) cognitive fit model. The model was validated with empirical data collected through an online questionnaire distributed to healthcare organizations in Saudi Arabia. The main sample participants were healthcare CEOs, senior managers/managers, doctors, nurses, and other relevant healthcare practitioners under the Ministry of Health (MoH) involved in decision-making related to COVID-19. The measurement model was validated using structural equation modeling (SEM) analyses. The empirical results largely supported the proposed conceptual model, as all research hypotheses were significantly supported. This study makes several theoretical contributions. For example, it expands the theoretical horizon of Shaft and Vessey's (2006) cognitive fit theory by considering new mechanisms, such as the inclusion of G2G knowledge-based exchange, in addition to the moderating effect of experience-based decision-making (EBDM), to enhance the decision-making process related to the COVID-19 pandemic. Research limitations and future research directions are discussed at the end of the study.
232
Big data analytics capability and market performance: The roles of disruptive business models and competitive intensity
Olabode, Oluwaseun E., Boso, N., Hultman, M., Leonidou, C.N. 08 October 2021
Research shows that big data analytics capability (BDAC) is a major determinant of firm performance. However, scant research has theoretically articulated and empirically tested the mechanisms and conditions under which BDAC influences performance. This study advances existing knowledge on the BDAC–performance relationship by drawing on the knowledge-based view and contingency theory to argue that how and when BDAC influences market performance is dependent on the intervening role of disruptive business models and the contingency role of competitive intensity. We empirically test this argument on primary data from 360 firms in the United Kingdom. The results show that disruptive business models partially mediate the positive effect of BDAC on market performance, and this indirect positive effect is strengthened when competitive intensity increases. These findings provide new perspectives on the business model processes and competitive conditions under which firms maximize marketplace value from investments in BDACs.
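To make the mediation-with-moderation logic concrete, here is a minimal sketch, assuming simulated stand-in data, of how such a model can be estimated. The variable names, coefficients, and the two-stage OLS-with-interaction approach are illustrative assumptions, not the authors' actual analysis.

```python
# Illustrative sketch (not the authors' code): BDAC -> disruptive business
# model -> market performance, with competitive intensity moderating the
# first path. All data below are simulated placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 360  # matches the study's sample of 360 UK firms

bdac = rng.normal(size=n)                    # big data analytics capability
intensity = rng.normal(size=n)               # competitive intensity
dbm = 0.5 * bdac + 0.2 * bdac * intensity + rng.normal(size=n)  # mediator
perf = 0.3 * bdac + 0.4 * dbm + rng.normal(size=n)              # outcome

df = pd.DataFrame({"bdac": bdac, "intensity": intensity,
                   "dbm": dbm, "perf": perf})

# Path a (moderated): BDAC and its interaction with intensity -> mediator.
m_a = smf.ols("dbm ~ bdac * intensity", data=df).fit()
# Path b plus direct effect c': mediator and BDAC -> performance.
m_b = smf.ols("perf ~ dbm + bdac", data=df).fit()

# Conditional indirect effect at -1, 0, +1 SD of the moderator.
for z in (-1.0, 0.0, 1.0):
    a = m_a.params["bdac"] + m_a.params["bdac:intensity"] * z
    indirect = a * m_b.params["dbm"]
    print(f"intensity={z:+.0f} SD: indirect effect = {indirect:.3f}")
```

In a full analysis the indirect effect would be tested with bootstrapped confidence intervals rather than point estimates alone; the loop above only shows how the effect strengthens as intensity rises.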
233
How and when does big data analytics capability contribute to market performance?
Olabode, Oluwaseun E., Boso, N., Hultman, Magnus, Leonidou, C.N. 19 September 2023
This study examines the relationship between big data analytics capability and market performance, and how this relationship can be facilitated by adopting disruptive business models in competitive environments.
234
Prediction and analysis of model's parameters of Li-ion battery cells
Dareini, Ali January 2016
Lithium-ion batteries are complex systems, and building a simulation model of them is always challenging. A method for producing an accurate model that can predict the behavior of the battery in a time- and cost-efficient way is highly desired in this field. The aim of this thesis has been to develop a method that comes as close to this ideal as possible, especially in two important respects: time and cost. The target method should fulfill five requirements:

1. Produce a generic battery model applicable to different types of lithium-ion batteries
2. Require no or low cost to develop the model
3. Deliver the model within a time span of around one week
4. Predict most aspects of the battery's behavior, such as voltage, SOC, and temperature, and preferably also simulate degradation effects, safety, and thermal aspects
5. Achieve accuracy with less than 15% error

The starting point of this thesis was a study of current cell-modeling methods. Based on their approach, they can be divided into three categories: abstract, black-box, and white-box methods. Each has its own advantages and disadvantages, but none fulfills all of the above requirements. This thesis presents a method, called the "gray box", which is in part a mix of the black-box and white-box concepts. The gray-box method obtains values for the model's parameters from different sources: first, chemical/physical measurements, as in the white-box method; second, some of the physical tests/experiments used in the black-box method; and third, information provided by cell datasheets, books, papers, journals, and scientific databases. As the practical part of this thesis, a prismatic cell, the EIG C20 with 20 Ah capacity, was selected as the sample cell, and its electrochemical model was produced with the proposed method. Some of the model's parameters were measured and others estimated. The capabilities of AutoLion, specialized software for lithium-ion battery modeling, were also used to accelerate the modeling process. Finally, physical tests served as part of the reference for calculating the accuracy of the produced model. The results show that the gray-box method can produce a model at nearly no cost, in less than one week, with an error of around 30% for the HPPC tests and less than that for the OCV and voltage tests. The proposed method could therefore largely fulfill the five requirements. These results were achieved without using any physical test or experimental data to tune the parameters; such tuning is expected to reduce the error considerably. These are promising results for the gray-box idea, which is in its nascent stages and needs time to develop before it is useful for commercial purposes.
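As a hedged illustration of parameter-based cell modeling, the sketch below simulates a first-order Thevenin equivalent circuit, a far simpler stand-in for the electrochemical model built in AutoLion. Every parameter value and the OCV curve are invented placeholders, not EIG C20 data.

```python
# Minimal gray-box-style sketch: a first-order Thevenin equivalent circuit
# whose parameters would, in the thesis' spirit, come from a mix of datasheet
# values, literature, and a few measurements. All numbers are illustrative.
import numpy as np

CAPACITY_AH = 20.0                    # nominal capacity, as for the EIG C20
R0, R1, C1 = 0.0015, 0.0010, 5000.0   # ohmic and polarization parameters (assumed)

# Hypothetical open-circuit-voltage curve, interpolated over state of charge.
soc_pts = np.array([0.0, 0.2, 0.5, 0.8, 1.0])
ocv_pts = np.array([3.0, 3.45, 3.65, 3.95, 4.15])

def ocv(soc):
    return np.interp(soc, soc_pts, ocv_pts)

def simulate(current_a, dt_s, t_end_s, soc0=1.0):
    """Simulate terminal voltage under a constant discharge current."""
    steps = int(t_end_s / dt_s)
    soc, v_rc = soc0, 0.0
    trace = []
    for _ in range(steps):
        # Coulomb counting for SOC, explicit Euler for the RC branch.
        soc -= current_a * dt_s / (CAPACITY_AH * 3600.0)
        v_rc += dt_s * (current_a / C1 - v_rc / (R1 * C1))
        v_term = ocv(soc) - current_a * R0 - v_rc
        trace.append((soc, v_term))
    return trace

# 1C discharge (20 A) for one hour, 1 s steps.
trace = simulate(current_a=20.0, dt_s=1.0, t_end_s=3600.0)
print(f"final SOC = {trace[-1][0]:.3f}, final voltage = {trace[-1][1]:.3f} V")
```

Fitting R0, R1, and C1 against HPPC pulse data would be the natural next step; the thesis' reported ~30% HPPC error reflects skipping exactly that tuning stage.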
235
Inductive machine learning bias in knowledge-based neurocomputing
Snyders, Sean 04 1900
Thesis (MSc) -- Stellenbosch University, 2003.
ENGLISH ABSTRACT: The integration of symbolic knowledge with artificial neural networks is becoming an increasingly popular paradigm for solving real-world problems. This paradigm, named knowledge-based neurocomputing, provides the means to use prior knowledge to determine the network architecture, to program a subset of weights to induce a learning bias that guides network training, and to extract refined knowledge from trained neural networks. The role of neural networks then becomes that of knowledge refinement. It thus provides a methodology for dealing with uncertainty in the initial domain theory. In this thesis, we address several advantages of this paradigm and propose a solution to the open question of determining the strength of this learning, or inductive, bias. We develop a heuristic for determining the strength of the inductive bias that takes the network architecture, the prior knowledge, the learning method, and the training data into consideration. We apply this heuristic to well-known synthetic problems as well as to published difficult real-world problems in the domains of molecular biology and medical diagnosis. We found that not only do the networks trained with this adaptive inductive bias show superior performance over networks trained with the standard method of determining the strength of the inductive bias, but also that the refined knowledge extracted from these trained networks delivers more concise and accurate domain theories.
AFRIKAANSE OPSOMMING (translated): The integration of symbolic knowledge with artificial neural networks is becoming an increasingly popular paradigm for solving real-world problems. This paradigm, called knowledge-based neurocomputing, provides the ability to use prior knowledge to determine the network architecture, to program a subset of weights to induce a learning bias that guides network training, and to extract refined knowledge from trained networks. The role of neural networks then becomes that of knowledge refinement. It thus provides a methodology for handling uncertainty in the initial domain theory. In this thesis we address several advantages contained in this paradigm and propose a solution to the open question of determining the weight of this learning, or inductive, bias. We develop a heuristic for determining the inductive bias that takes the network architecture, the prior knowledge, the learning method, and the training data into account. We apply this heuristic to well-known synthetic problems as well as to published difficult real-world problems in the field of molecular biology and medical diagnostics. We find that not only do the networks trained with the adaptive inductive bias show superior performance over the networks trained with the standard method of determining the weight of the inductive bias, but also that the refined knowledge extracted from these trained networks delivers more concise and accurate domain theories.
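As a hedged illustration of the core mechanism, the sketch below shows how a symbolic rule can program a subset of weights in a KBANN-style network, with a single scalar controlling the strength of the inductive bias. The rule, feature names, and encoding scheme are hypothetical, not the thesis' implementation.

```python
# Sketch: encoding one conjunctive rule as a pre-programmed hidden unit.
# A larger bias strength makes the network start closer to the domain theory;
# a smaller one lets the training data override the prior knowledge sooner.
import numpy as np

def rule_to_weights(antecedents, feature_index, n_features, bias_strength):
    """Encode a conjunctive rule as one hidden unit: each antecedent's weight
    gets +bias_strength, and the threshold is set so that all antecedents
    must be active for the unit to fire."""
    w = np.zeros(n_features)
    for name in antecedents:
        w[feature_index[name]] = bias_strength
    theta = bias_strength * (len(antecedents) - 0.5)  # just below the full sum
    return w, theta

features = ["contact", "conformation", "minus35", "minus10"]
idx = {f: i for i, f in enumerate(features)}

# Hypothetical rule: promoter :- contact, conformation.
for strength in (0.5, 4.0):
    w, theta = rule_to_weights(["contact", "conformation"], idx,
                               len(features), strength)
    x = np.array([1.0, 1.0, 0.0, 0.0])        # both antecedents present
    out = 1.0 / (1.0 + np.exp(-(w @ x - theta)))   # sigmoid activation
    print(f"bias strength {strength}: initial rule-unit output = {out:.2f}")
```

The thesis' heuristic would choose that strength adaptively from the architecture, prior knowledge, learning method, and training data, rather than fixing it by hand as done here.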
236
Big Data: A Tool for all Strategic Decisions: A Study of Three Large Food and Beverage Processing Organizations
Arsenovic, Jasenko January 2015
This study looks at the impact big data has had on managerial strategic decisions in the food and beverage industry. To understand the complexity and theory of organizational strategic management, a literature review on organizational strategy was conducted in an effort to condense contemporary strategic theory into a holistic conceptual model. This literature explicitly proposes four distinctly different types of strategies that management needs to consider in the organizational context: long-term strategy, internal business strategy, external corporate strategy, and competitive strategy. The study analyzed the food and beverage industry over a decade (2005-2014), selecting three of the largest actors in the industry: Nestlé S.A., PepsiCo Inc., and Unilever. The chosen method was content analysis, for which three structured categorization matrixes were developed, each analyzing parts of the annual reports. From a theoretical standpoint, the study proposes a role for big data as a strategic tool for managerial decisions. The content analysis shows that hypothesis 1 could be confirmed: big data has an impact on all four proposed managerial strategic decisions. The second hypothesis could not be confirmed, since decentralization occurs for only one of the organizations, although increased external environmental turbulence could be concluded for the industry in general. The third hypothesis could be confirmed, showing an increase in individualization due to increased customer involvement and demand. The analysis identified three distinct time periods during the decade: the pre-economic-instability period (2005-2007), the economic-instability period (2008-2011), and the post-paradigm period (2012-2014), with 2011 the most turbulent year for the industry in terms of economy and technology. The study clearly shows that customers are now involved in the production process as co-creators of the products, that there is now two-way communication, and that social responsibility awareness has increased. As this study predicted, the old traditional approach of analyzing markets in order to position oneself to stay competitive is obsolete; customers demand to be part of the organizational culture. The study concludes that big data is an important tool for all strategic managerial decisions.
237
The development of a hybrid knowledge-based Collaborative Lean Manufacturing Management (CLMM) system for an automotive manufacturing environment: the development of a hybrid Knowledge-Based (KB)/Analytic Hierarchy Process (AHP)/Gauging Absences of Pre-Requisites (GAP) approach to the design of a Collaborative Lean Manufacturing Management (CLMM) system for an automotive manufacturing environment
Moud Nawawi, Mohd Kamal January 2009
The automotive manufacturing facility is an extremely complex and expensive system, and managing and understanding the dynamics of automotive manufacturing is a challenging endeavour. In the current era of dynamic global competition, a new concept such as Collaborative Lean Manufacturing Management (CLMM) can be implemented as an alternative way for organisations to improve their Lean Manufacturing Management (LMM) processes. All members of the CLMM value chain must work together towards common objectives in order to make LMM achievable in the collaborative environment. The novel research approach emphasises the use of a Knowledge-Based (KB) approach in activities such as planning, designing, assessing and providing recommendations for CLMM implementation, through: a) developing the conceptual CLMM model; b) designing the KBCLMM system structure based on the conceptual model; and c) implementing Gauging Absences of Pre-requisites (GAP) analysis and the Analytic Hierarchy Process (AHP) approach in the hybrid KBCLMM. The development of the KBCLMM model is the most detailed part of the research process and consists of five major components in two stages. Stage 1 (the planning stage) consists of the Organisation Environment, Collaborative Business and Lean Manufacturing components; Stage 2 (the design stage) consists of the Organisation CLMM Capability and Organisation CLMM Alignment components. Each of these components consists of sub-components and activities that represent particular issues in CLMM development. From the conceptual model, all components were transformed into the KBCLMM system structure, which embeds the GAP and AHP techniques; key areas of potential improvement in LMM are thus identified for each activity, along with both qualitative and quantitative aspects of CLMM implementation. To address the real situation of CLMM operation, the research was validated on an automotive manufacturer's lean manufacturing chain in Malaysia, and published case studies were also used to test several modules for validity and reliability. This research concludes that the developed KBCLMM system is an appropriate decision support system tool, giving academics and industrialists from the fields of industrial engineering, information technology and operations management the opportunity to plan, design and implement LMM for a collaborative environment.
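As a hedged illustration of the AHP step embedded in the KBCLMM system, the sketch below derives priority weights from a pairwise comparison matrix via the principal eigenvector and checks Saaty's consistency ratio. The three criteria and comparison values are invented for illustration.

```python
# AHP sketch: priority weights from a 3x3 pairwise comparison matrix on
# Saaty's 1-9 scale, e.g. comparing three hypothetical lean criteria.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)              # index of the principal eigenvalue
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                 # normalized priority vector

n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)     # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]      # Saaty's random index
print("weights:", np.round(weights, 3), " CR =", round(ci / ri, 3))
```

A consistency ratio below 0.1 is the conventional threshold for accepting the judgments; in a GAP/AHP hybrid such weights would then rank which missing pre-requisites to address first.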
238
Exploring the design space of aluminium tubing using knowledge objects and FEM
Patil, Aniket, Chebbi, Girish January 2008
No description available.
239
Business Valuation: How to Value Private Limited Knowledge Based Companies
Olsson, Fredrik, Persson, Martin January 2009
Purpose: The purpose of this study is to investigate the methods used for valuating private limited knowledge based companies and, if a new approach is required, to create or modify a foundation that will serve as a base within the valuation process.

Method: This is a qualitative study using interviews to obtain primary data. People working in the valuation industry were contacted, yielding eight respondents. The questions were designed to answer our purpose and research questions. Telephone interviews were chosen because we believed the response rate would be higher.

Frame of References: The theories used in this section are divided into three parts: the financial analysis, including traditional valuation methods such as the Discounted Cash Flow model along with relative valuation and multiples; the non-financial analysis, focusing on the underlying analysis consisting of structural and intellectual capital as well as the value drivers that create value for the firm; and finally other theories concerning the analysis, such as the risk-return trade-off, risk rating systems and the analytic hierarchy process.

Empirical Findings and Analysis: This section presents the respondents' answers and a brief analysis related to each question, followed by an extended analysis focusing on the subject and on the risk scheme and guidelines we created/modified. The extended analysis is connected to the respondents' answers. The purpose of this section is to better understand the risk of transient intellectual capital and to give recommendations on how to handle it. Guidelines for how to weight different value drivers are also discussed.

Conclusion: We concluded that all valuations utilize more than one approach in order to estimate the most accurate value for the company. For knowledge based companies, the biggest risk in an M&A transaction is the probability of diminishing the intellectual capital. We constructed a model to manage this risk based on our interviews and established theories.
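As a hedged illustration of the Discounted Cash Flow model cited in the frame of references, the sketch below discounts a forecast of free cash flows plus a Gordon-growth terminal value. The cash flows, discount rate, and growth rate are hypothetical placeholders, not figures from the study.

```python
# Minimal DCF sketch: enterprise value = PV of forecast free cash flows
# plus the present value of a Gordon-growth terminal value.
def dcf_value(free_cash_flows, discount_rate, terminal_growth):
    pv = sum(fcf / (1 + discount_rate) ** t
             for t, fcf in enumerate(free_cash_flows, start=1))
    last = free_cash_flows[-1]
    terminal = last * (1 + terminal_growth) / (discount_rate - terminal_growth)
    pv_terminal = terminal / (1 + discount_rate) ** len(free_cash_flows)
    return pv + pv_terminal

# Five-year forecast (in, say, kSEK), 12% discount rate, 2% perpetual growth.
value = dcf_value([100, 110, 120, 130, 140],
                  discount_rate=0.12, terminal_growth=0.02)
print(f"enterprise value: {value:.0f}")
```

For a knowledge based company, the discount rate would typically be adjusted upward to reflect the risk of transient intellectual capital that the study's risk scheme addresses.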
240
Unsupervised Knowledge-based Word Sense Disambiguation: Exploration & Evaluation of Semantic Subgraphs
Manion, Steve Lawrence January 2014
Hypothetically, if you were told "Apple uses the apple as its logo", you would immediately detect two different senses of the word apple, these being the company and the fruit respectively. Making this distinction is the formidable challenge of Word Sense Disambiguation (WSD), which is a subtask of many Natural Language Processing (NLP) applications. This thesis is a multi-branched investigation into WSD that explores and evaluates unsupervised knowledge-based methods that exploit semantic subgraphs. The research covered by this thesis can be broken down into:

1. Mining data from the encyclopedic resource Wikipedia, to visually demonstrate the existence of context embedded in semantic subgraphs
2. Achieving disambiguation in order to merge concepts that originate from heterogeneous semantic graphs
3. Participation in international evaluations of WSD across a range of languages
4. Treating WSD as a classification task that can be optimised through the iterative construction of semantic subgraphs

The contributions of each chapter vary, but can be summarised by what has been produced, learnt, and raised throughout the thesis. Furthermore, an API and several resources have been developed as a by-product of this research, all of which can be accessed by visiting the author's home page at http://www.stevemanion.com. This should enable researchers to replicate the results achieved in this thesis and build on them if they wish.
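As a hedged illustration of how a semantic subgraph can drive unsupervised disambiguation, the sketch below ranks the two senses of the running "apple" example with personalized PageRank seeded on a context word. The toy graph is invented; a real system would build the subgraph from a resource such as Wikipedia.

```python
# Toy semantic subgraph: two candidate senses of "apple" plus related concepts.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("apple#company", "logo"),
    ("apple#company", "technology"),
    ("apple#fruit", "food"),
    ("apple#fruit", "tree"),
])

# Personalized PageRank seeded on the context word of the capitalized "Apple"
# in "Apple uses the apple as its logo" -- here just "logo".
rank = nx.pagerank(G, personalization={"logo": 1.0})

senses = {n: r for n, r in rank.items() if n.startswith("apple#")}
print("sense scores:", {k: round(v, 3) for k, v in senses.items()})
print("selected sense:", max(senses, key=senses.get))  # -> apple#company
```

Rank mass flows from the seeded context node into the company sense, so it wins; disambiguating the lowercase "apple" would require richer context nodes than this toy graph provides.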