21 |
Willingness to pay for personalised nutrition across Europe. Fischer, A.R.H., Berezowska, A., van der Lans, I.A., Ronteltap, A., Rankin, A., Kuznesof, S., Poínhos, R., Stewart-Knox, Barbara, Frewer, L.J.
Personalised nutrition (PN) may promote public health. PN involves dietary advice based on individual characteristics of end users and can, for example, be based on lifestyle, blood and/or DNA profiling. Currently, PN is not reimbursed by most health insurance or health care plans, so improved public health is contingent on individual consumers being willing to pay for the service. Methods: A survey with a representative sample from the general population was conducted in eight European countries (N = 8233). Participants reported their willingness to pay (WTP) for PN based on lifestyle information, lifestyle and blood information, and lifestyle and DNA information. WTP was elicited by contingent valuation with the price of a standard, non-PN advice used as reference. Results: About 30% of participants reported being willing to pay more for PN than for non-PN advice. They were on average prepared to pay about 150% of the reference price of a standard, non-personalised advice, with some differences related to socio-demographic factors. Conclusion: There is a potential market for PN compared to non-PN advice, particularly among men on higher incomes. These findings raise questions as to what extent personalised nutrition can be left to the market or should be incorporated into public health programmes.
EC (FP7) funded Food4Me project.
|
22 |
Sex and age differences in attitudes and intention to adopt personalised nutrition in a UK sample. Stewart-Knox, Barbara, Poínhos, R., Fischer, A.R.H., Chaudhrey, M., Rankin, A. 19 November 2021.
There has been an increase in the development of technologies that can deliver personalised dietary advice. Devising healthy, sustainable dietary plans will mean taking into consideration extrinsic factors such as individual social circumstances. The aim of this study was to identify societal groups more or less receptive to, and likely to engage with, digitally delivered personalised nutrition initiatives.
Sample and Methods: Volunteers were recruited via a social research agency from within the UK. The resultant sample (N=1061) was 49% female, aged 18-65 years.
Results: MANOVA (Tukey HSD applied) indicated that females and younger people (aged 18-29 years) had more favourable attitudes and were more likely to intend to adopt personalised nutrition. There were no differences in attitude toward or intention to adopt personalised nutrition between different education levels, income brackets or occupational groups.
Conclusion: These results imply that females and younger people may be most likely to adopt personalised nutrition in the future. Initiatives to promote personalised nutrition should target males and older people.
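The analysis reported above (MANOVA with Tukey HSD post-hoc tests on attitude and intention scores) could be run along the following lines. This is a sketch on synthetic data; the column names ('attitude', 'intention', 'sex', 'age_group') are assumptions, not the study's actual variables:

```python
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Synthetic stand-in for the survey data: two dependent variables
# (attitude, intention) and two factors (sex, age group).
rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "sex": rng.choice(["female", "male"], n),
    "age_group": rng.choice(["18-29", "30-49", "50-65"], n),
    "attitude": rng.normal(4.0, 1.0, n),
    "intention": rng.normal(3.5, 1.0, n),
})

# Multivariate test of sex and age-group effects on both outcomes jointly.
fit = MANOVA.from_formula("attitude + intention ~ sex + age_group", data=df)
print(fit.mv_test())

# Post-hoc pairwise comparisons of age groups on attitude (Tukey HSD).
print(pairwise_tukeyhsd(df["attitude"], df["age_group"]))
```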
|
23 |
Normalising the Implementation of Pharmacogenomic (PGx) Testing in Adult Mental Health Settings: A Theory-Based Systematic Review. Jameson, Adam, Tomlinson, Justine, Medlinskiene, Kristina, Dane, Howard, Saeed, Imran, Sohal, J., Dalton, C., Sagoo, G.S., Cardno, A., Bristow, Greg C., Fylan, Beth, McLean, Samantha. 18 September 2024.
Pharmacogenomic (PGx) testing can help personalise psychiatric prescribing and improve on the trial-and-error prescribing approach currently adopted. However, widespread implementation is yet to occur, and understanding the factors that influence implementation is pertinent to the psychiatric PGx field. Normalisation Process Theory (NPT) seeks to understand the work involved in implementing an intervention and is used by this review (PROSPERO: CRD42023399926) to explore factors influencing PGx implementation in psychiatry. Four databases were systematically searched for relevant records, which were assessed for eligibility following PRISMA guidance. The QuADS tool was applied during quality assessment of included records. Using an abductive approach to codebook thematic analysis, barrier and facilitator themes were developed with NPT as the theoretical framework. Twenty-nine records were included in the data synthesis. Key barrier themes included a PGx knowledge gap, a lack of consensus in policy and guidance, and uncertainty towards the use of PGx. Facilitator themes included interest in PGx as a new and improved approach to prescribing, a desire for a multidisciplinary approach to PGx implementation, and the importance of fostering a climate for PGx implementation. Using NPT, this novel review systematically summarises the literature on psychiatric PGx implementation. The findings highlight a need to develop national policies on the use of PGx and an education and training workforce plan for mental health professionals. By clarifying the factors influencing implementation, the findings help to address the psychiatric PGx implementation gap, moving clinical practice closer to a personalised psychotropic prescribing approach and the associated improvements in patient outcomes. Future policy and research should focus on the appraisal of PGx implementation in psychiatry and the role of pharmacists in PGx service design, implementation, and delivery.
|
24 |
Personaliserade erbjudanden inom marknadsföring : En kvalitativ studie angående konsumenternas upplevda respons av personaliserade erbjudanden från företag med avseende på förtroende, kontroll och transparens / Personalised offers in marketing : A qualitative study regarding consumers' perceived response to personalised offers from companies with regard to trust, control and transparency. Kaskamo Andersson, Viktor. January 2021.
Research in marketing has recently focused on consumers' buying behaviour and buying patterns, which digitalisation in society and around the world has strongly shaped. A growing number of scientific studies examine consumers' personal privacy and how it can be affected by personalised offers from companies, for example through the marketing method 'Retargeting'. Personalised offers, also called tailored offers, are becoming increasingly visible in business-to-consumer (B2C) marketing as companies try to "stand out" from everyday offerings and increase sales and profits by targeting products and services at individual users. Consumers, however, may respond to these offers either positively or negatively. Five distinct factors can be linked to consumers' perceived response: (1) reactance, (2) irritation, (3) boycott, (4) privacy concern and (5) attraction, of which only the last is perceived as positive. These responses can be traced to three key factors that affect how personalised offers influence consumers: (1) the degree of trust, (2) the degree of control and (3) the degree of transparency. Several scientific studies in this area discuss both similarities and differences between the effects and consequences that arise and how they challenge the balance between companies and consumers: on the one hand, consumers receive personalised offers that companies assume they will find attractive; on the other hand, companies collect, store and use consumers' personal data on a large scale, since an increasingly digital society constantly requires consumers to connect online, for example via social media and company websites. This study therefore examines the three key factors, trust, control and transparency, both separately and in interaction, and how they affect personalised offers from companies in relation to consumers' perceived response. Trust was perceived as a decisive factor that can improve the relationship between companies and consumers, but it was also considered fragile, as reduced or lost trust can lead to negative consequences. The degree of control was considered sufficient up to a certain level: it was found necessary that consumers have the option to control their personal information if they wish. The degree of transparency was identified as an essential factor in creating greater understanding of, and insight into, companies' handling of the personal data that consumers share and that companies collect, store and use in their personalised offers to match consumers' perceived needs.
|
25 |
An exploratory study on perceptions of personalised display ads online : A comparison of Swedish generations: Do consumers willingly surrender their privacy for the usefulness of personalised advertising? Gerdman, Thomas, Nordqvist, Felicia. January 2017.
Swedish consumers are concerned about their online privacy, while companies increasingly gather personal information with business intelligence (BI) technologies in order to customise online banner ads, one of the favoured marketing techniques; marketers, meanwhile, treasure the opportunity to target individuals. The purpose of this research is to generate insights into Swedes' experiences of intrusion on their privacy online and their behavioural responses to personalised banner advertisements. The research also examines whether these responses differ with consumers' age, and considers how mediating factors influence the perceived intrusiveness and usefulness of personalised ads. The study is exploratory and aims to provide a broad picture of awareness and beliefs around a complex phenomenon. It takes a qualitative approach in which data are collected through semi-structured, in-depth interviews with Swedish consumers from two age groups, complemented by three expert interviews. The results show that, in comparison, older consumers have less knowledge of personalised advertising and BI technology, leading to higher privacy concern and perceived intrusiveness when exposed to these ads. Members of Generation Y understand the phenomenon to a greater extent and more easily see the usefulness it presents, but are overall ambivalent. Attitudes are likely to be formed on the basis of experienced intrusiveness versus usefulness, but do not clearly influence trust, loyalty or future purchasing behaviour. Generally, marketers' and consumers' views are incongruent: marketers remain very positive about using personal information to customise ads, while consumers do not always perceive it similarly. A balance can be difficult to achieve, but there is unanimous belief that personalised ads demand high accuracy of content and placement to be perceived as useful.
|
26 |
An actor-network theory reading of change for looked after children. Parker, Elisabeth. January 2016.
The education of looked-after children (LAC) in the care of the Local Authority (LA) is supported by government initiatives to reduce the attainment gap that exists between LAC and their peers. Long-term outcomes for LAC pupils are poor (Sebba et al. 2015). The Virtual School (VS) has a statutory role in the education of LAC (DfE, 2014a) and aims to encourage stringent monitoring and intervention for LAC pupils, for example via a personalised education plan (PEP) outlining attainment, strategies intended to accelerate progress, and resources needed for doing so. The PEP process involves termly meetings between pupil, Social Worker and school's designated teacher. The current study uses Actor-Network Theory (ANT) (Latour, 1999) as a lens through which to conceptualise change for LAC pupils during the PEP process. Data was collected from three PEP meetings and accompanying documentation in one LA setting, using ethnomethodology, in order to explore the human and non-human actors in the PEP network which are active in creating change for LAC. The analysis made visible the strong role of the PEP document in providing structure for the meeting, along with the instrumental role of the designated teacher and their knowledge of the pupil embodied in non-human entities such as resources, timetabling and grades. The Social Worker influence on the network was less visible. ANT is explored as a material semiotic tool for analysis through a conceptual review of current literature within educational research, with a focus on the construction of research questions. The review demonstrates that ANT can attempt to answer questions about 'how' things came to be and 'who' and 'what' they are composed of. The current research also incorporates an appraisal of evidence-based practice, and a consideration of the implications and dissemination of the findings of the study at LA level and beyond.
|
27 |
Personalisation of web information search: an agent based approach. Gopinathan-Leela, Ligon. January 2005.
The main purpose of this research is to find an effective way to personalise information searching on the Internet using middleware search agents, namely, Personalised Search Agents (PSA). The PSA acts between users and search engines, and applies new and existing techniques to mine and exploit relevant and personalised information for users.
Much research has already been done on personalising filters, a middleware technique that can act between user and search engines to deliver more personalised results. These personalising filters apply one or more of the popular techniques for search result personalisation, such as the category concept, learning from user actions and using meta-search engines. In developing the PSA, these techniques have been investigated and incorporated to create an effective middleware agent for web search personalisation.
In this thesis, a conceptual model for the Personalised Search Agent is developed, implemented as a prototype, and benchmarked against existing web search practices. A system development methodology with flexible, iterative procedures that switch between conceptual design and prototype development was adopted as the research methodology.
In the conceptual model of the PSA, a multi-layer client-server architecture is used, applying generalisation-specialisation features. The client and the server are structurally the same but differ in their level of generalisation and their interface. The client handles personalising information for one user, whereas the server combines the personalising information of all its clients (i.e. its users) to generate a global profile. Both client and server apply the category concept, in which user-selected URLs are mapped against categories. The PSA learns which URLs are relevant to the user both by requesting explicit feedback and by implicitly capturing user actions (for instance, the active time the user spends on a URL). The PSA also employs a keyword-generating algorithm that tries different combinations of words in a user's search string by combining them with the relevant category values.
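As an illustration of the two mechanisms just described, the active-time relevance calculation and the keyword-combination generator, here is a minimal Python sketch. It is a reconstruction under stated assumptions (the function names, the 300-second cap and the equal blending of implicit and explicit signals are all invented for illustration), not the thesis's actual code:

```python
from itertools import combinations
from typing import Optional

def implicit_relevance(active_seconds: float,
                       explicit_rating: Optional[float] = None,
                       max_seconds: float = 300.0) -> float:
    """Score a URL's relevance from the time the user actively spent on it,
    optionally blended with an explicit user rating in [0, 1]."""
    implicit = min(active_seconds / max_seconds, 1.0)
    if explicit_rating is None:
        return implicit
    return 0.5 * implicit + 0.5 * explicit_rating  # assumed 50/50 blend

def keyword_combinations(query: str, category_terms: list, max_extra: int = 2):
    """Expand a user search string with combinations of the category values
    that the user's profile associates with the query."""
    base = query.split()
    for k in range(1, max_extra + 1):
        for extra in combinations(category_terms, k):
            yield " ".join(base + list(extra))

# Example: expanding a query with terms from its mapped category.
for q in keyword_combinations("jaguar speed", ["animal", "wildlife"]):
    print(q)
```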
The core functionalities of the conceptual model were implemented in a prototype and used to test the ideas in the real world. The results were benchmarked against those of existing search engines to determine the efficiency of the PSA over conventional searching. A comparison of the test results revealed that the PSA is more effective and efficient at finding relevant, personalised results for individual users, and possesses a sense of the unique user rather than the general user sense of traditional search engines.
The PSA is a novel architecture and contributes to the knowledge domain of web information searching by delivering new ideas such as active-time-based user relevancy calculations, automatic generation of sensible search keyword combinations, and the implementation of a multi-layer agent architecture. The PSA also has high potential for future extensions: because it captures highly personalised data, data mining techniques that employ case-based reasoning could make the PSA a more responsive, more accurate and more effective tool for personalised information searching.
|
28 |
Local and personalised models for prediction, classification and knowledge discovery on real world data modelling problems. Hwang, Yuan-Chun. January 2009.
This thesis presents several novel methods to address some real world data modelling issues through the use of local and individualised modelling approaches. A set of real world data modelling issues, such as modelling evolving processes, defining unique problem subspaces, and identifying and dealing with noise, outliers, missing values, imbalanced data and irrelevant features, is reviewed and their impact on models analysed. The thesis makes nine major contributions to information science: four generic modelling methods, three real world application systems that apply these methods, a comprehensive review of real world data modelling problems, and a data analysis and modelling software package.
Four novel methods were developed and published in the course of this study: (1) DyNFIS, a Dynamic Neuro-Fuzzy Inference System; (2) MUFIS, a fuzzy inference system that uses multiple types of fuzzy rules; (3) an Integrated Temporal and Spatial Multi-Model System; and (4) a Personalised Regression Model. DyNFIS addresses the issue of unique problem subspaces by identifying them through a clustering process, creating a fuzzy inference system based on the clusters, and applying supervised learning to update both the antecedent and consequent parts of the fuzzy rules. This puts strong emphasis on the unique problem subspaces and allows easy-to-understand rules to be extracted from the model, adding knowledge to the problem. MUFIS takes DyNFIS a step further by integrating a mixture of different types of fuzzy rules in a single fuzzy inference system. In many real world problems, some problem subspaces were found to be more suitable for one type of fuzzy rule than others; by integrating multiple types of fuzzy rules, a better prediction can be made. The type of fuzzy rule assigned to each unique problem subspace also provides additional understanding of its characteristics. The Integrated Temporal and Spatial Multi-Model System takes a different approach, integrating two contrasting views of the problem for better results: the temporal model uses recent data and the spatial model uses historical data to make the prediction. By combining the two through a dynamic contribution adjustment function, the system provides stable yet accurate predictions on real world data modelling problems with intermittently changing patterns. The Personalised Regression Model is designed for classification problems. Real world data modelling problems often involve noisy or irrelevant variables, and the number of input vectors in each class may be highly imbalanced; these issues make the definition of unique problem subspaces less accurate. The proposed method uses a model selection system based on an incremental feature selection method to select the best set of features. A global model is created from this set of features and then optimised using training input vectors in the test input vector's vicinity. This approach focuses on the definition of the problem space and puts emphasis on the problem subspace in which the test input vector resides.
The novel generic prediction methods listed above have been applied to three real world data modelling problems: (1) renal function evaluation, which achieved higher accuracy than all other existing methods while allowing easy-to-understand rules to be extracted from the model for future studies; (2) a milk volume prediction system for Fonterra, which achieved a 20% improvement over the method currently used by Fonterra; and (3) a prognosis system for pregnancy outcome prediction (SCOPE), which achieved more stable and slightly better accuracy than traditional statistical methods. These solutions constitute a contribution to the area of applied information science. In addition, a data analysis software package, NeuCom, was developed primarily by the author before and during the PhD study to facilitate some of the standard experiments and analyses on various case studies. This is a full-featured data analysis and modelling package that is freely available for non-commercial purposes (see Appendix A for more details).
In summary, many real world problems consist of many smaller problems. It was found beneficial to acknowledge the existence of these sub-problems and address them through the use of local or personalised models. The rules extracted from the local models also made new knowledge available to researchers and allow more in-depth study of the sub-problems in future research.
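Two of the ideas above lend themselves to a compact sketch: the personalised model optimised in the test vector's vicinity, and the dynamic contribution adjustment that blends a temporal and a spatial model. The following Python sketch is an illustrative reconstruction under assumed parameters (the neighbourhood size k and the inverse-error weighting are invented), not the thesis's implementation, which also includes incremental feature selection:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def personalised_predict(X_train, y_train, x_test, k=50):
    """Fit a classifier only on the k training vectors nearest to the test
    vector, so the model reflects the test point's own problem subspace."""
    nn = NearestNeighbors(n_neighbors=k).fit(X_train)
    _, idx = nn.kneighbors(x_test.reshape(1, -1))
    y_local = y_train[idx[0]]
    if len(np.unique(y_local)) == 1:  # neighbourhood contains a single class
        return y_local[0]
    local = LogisticRegression(max_iter=1000).fit(X_train[idx[0]], y_local)
    return local.predict(x_test.reshape(1, -1))[0]

def combine_predictions(temporal_pred, spatial_pred, err_temporal, err_spatial):
    """Dynamic contribution adjustment: weight the temporal (recent-data) and
    spatial (historical-data) models by their recent accuracy."""
    w_t = 1.0 / (err_temporal + 1e-9)
    w_s = 1.0 / (err_spatial + 1e-9)
    return (w_t * temporal_pred + w_s * spatial_pred) / (w_t + w_s)

# Synthetic demonstration of the personalised classifier.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 5))
y = (X[:, 0] + rng.normal(scale=0.5, size=500) > 0).astype(int)
print(personalised_predict(X[:-1], y[:-1], X[-1]))
```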
|
30 |
The interplay between genes and dietary factors in the aetiology of Type 2 Diabetes Mellitus. Li, Sherly (Xueyi). January 2018.
To help mitigate the escalating prevalence of Type 2 Diabetes (T2D) and relieve society of its associated morbidity and economic burden on health care, it is crucial to understand its aetiology. Both genetic and environmental risk factors are known to be involved. Healthy diets have been proven to reduce the risk of T2D in primary prevention trials, but which components and exact mechanisms are involved, in particular the role of macronutrient intake, is not fully understood. Body weight, glycaemic markers and T2D are all to some extent genetically regulated. There may also be genetic influences on how people digest, absorb or metabolise macronutrients. This poses the possibility that the interplay between genes and our diet may help us unravel T2D's aetiology. The aim of this PhD was to investigate gene-diet interactions on the risk of incident T2D, focusing primarily on macronutrient intake as the dietary factor. First, I systematically evaluated the current evidence before taking a step-wise approach (hypothesis-driven to hypothesis-free) to interrogate gene-macronutrient interactions. This identified 13 publications, with 8 unique interactions reported between macronutrients (carbohydrate, fat, saturated fat, dietary fibre and glycaemic load derived from self-report of dietary intake, and circulating n-3 polyunsaturated fatty acids) and genetic variants in or near TCF7L2, GIPR, CAV2 and PEPD (p < 0.05) on T2D risk. All studies were observational with moderate to serious risk of bias, and limitations included lack of adequate adjustment for confounders, lack of reported replication and insufficient correction for multiple testing. Second, these reported interactions did not replicate in EPIC-InterAct, a large European multi-centre prospective T2D case-cohort study. We concluded that the heterogeneity between our results and those published could be explained by methodological differences in dietary measurement, population under study, study design and analysis, but also by the possibility of spurious interactions. Third, given the paucity of gene-macronutrient interaction research using genetic risk scores (GRS), we examined the interaction between three GRS (for BMI (97 SNPs), insulin resistance (53 SNPs) and T2D (48 SNPs)) and macronutrient intake (quantity and quality indicators) in EPIC-InterAct. We did not identify any statistically significant interactions that passed multiple testing correction (p ≥ 0.20, against a p value threshold for rejecting the null hypothesis of 0.0015, based on 0.05/33 tests). We also examined 15 foods and beverages identified as being associated with T2D, and no significant interactions were detected. Lastly, we applied a hypothesis-free method to examine gene-macronutrient interactions and T2D risk using a genome-environment-wide interaction study. Preliminary findings showed no significant interactions for total carbohydrate, protein, saturated fat, polyunsaturated fat and cereal fibre intake on T2D. In conclusion, the consistently null findings in this thesis, obtained using a range of statistical approaches to examine interactions between genetic variants and macronutrient intake on the risk of developing T2D, have two key implications. One, based on the specific interactions examined, this research does not confirm evidence for gene-diet interactions in the aetiology of T2D; and two, it suggests that the association between macronutrient intake and the risk of developing T2D does not differ by genotype.
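The interaction analyses described, a regression model with a GRS-by-macronutrient product term tested against a Bonferroni threshold of 0.05/33, follow a standard pattern. A minimal Python sketch on synthetic data is below; the variable names and effect sizes are illustrative, not EPIC-InterAct's, and the thesis's actual analyses used case-cohort methods rather than plain logistic regression:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in: a genetic risk score, a macronutrient exposure,
# covariates and T2D case status.
rng = np.random.default_rng(7)
n = 5000
df = pd.DataFrame({
    "grs": rng.normal(size=n),
    "carb_energy_pct": rng.normal(50, 8, size=n),
    "age": rng.normal(55, 9, size=n),
    "sex": rng.integers(0, 2, size=n),
})
logit_p = -2 + 0.3 * df["grs"] + 0.01 * (df["carb_energy_pct"] - 50)
df["t2d"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# Interaction model: does the GRS effect on T2D vary with carbohydrate intake?
model = smf.logit("t2d ~ grs * carb_energy_pct + age + sex", data=df).fit(disp=0)
p_int = model.pvalues["grs:carb_energy_pct"]

# Bonferroni threshold used in the thesis: 0.05 / 33 tests = 0.0015.
print(f"interaction p = {p_int:.4f}, significant: {p_int < 0.05 / 33}")
```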
|