
Journal of Electrical Electronics Engineering (JEEE)

ISSN: 2834-4928 | DOI: 10.33140/JEEE

Impact Factor: 1.2

Research Article - (2026) Volume 5, Issue 1

A Comprehensive Evaluation of the Strengths and Weaknesses of Different AI Algorithms in Delivering Tailored User Experiences Based on Empirical Evidence

Joseph Foley *
 
Munster Technological University, Ireland
 
*Corresponding Author: Joseph Foley, Munster Technological University, Ireland

Received Date: Dec 09, 2025 / Accepted Date: Jan 19, 2026 / Published Date: Jan 27, 2026

Copyright: ©2026 Joseph Foley. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Citation: Foley, J. (2026). A Comprehensive Evaluation of the Strengths and Weaknesses of Different AI Algorithms in Delivering Tailored User Experiences Based on Empirical Evidence. J Electrical Electron Eng, 5(1), 01-08.

Abstract

This paper presents a comprehensive evaluation of artificial intelligence algorithms utilised to deliver personalised user experiences across digital platforms. This research utilises empirical evidence from recent studies to evaluate the performance, strengths, and limitations of collaborative filtering, content-based filtering, deep learning methods, and hybrid systems. The findings indicate that although AI-driven personalisation enhances user engagement and satisfaction, significant challenges remain, including cold-start problems, algorithmic bias, privacy concerns, scalability limitations, and the emergence of filter bubbles. This study highlights key gaps in existing research and suggests directions for developing more ethical, transparent, and effective personalisation systems.

Keywords

Artificial Intelligence, Personalisation Algorithms, Collaborative Filtering, Deep Learning, User Experience, Algorithmic Bias, Privacy

Introduction

The rapid expansion of digital information has introduced significant challenges in effectively connecting users with relevant resources. Recommender systems have become essential for mitigating information overload by filtering content and delivering personalised experiences. The integration of artificial intelligence (AI) and machine learning (ML) has transformed personalisation, allowing platforms to analyse extensive user data and provide highly tailored recommendations [1].

AI-driven personalisation in e-commerce has demonstrated improvements in user engagement, satisfaction, and purchase intention through advanced techniques implemented across digital marketing channels. Major platforms such as Netflix, Amazon, Spotify, and various social media networks have made substantial investments in AI-powered recommendation systems, acknowledging their essential role in user retention and revenue generation [2].

Despite these advancements, AI-driven personalisation introduces significant ethical challenges, such as data privacy concerns, algorithmic bias, and transparency issues that may erode consumer trust. This paper systematically evaluates various AI algorithms used for personalisation, assessing their technical capabilities, performance metrics, and limitations based on empirical evidence from recent studies.

Literature Review

A. Evolution of Personalisation Systems

Personalisation methods have progressed from reliance on singular rule-based methodologies to the implementation of advanced, integrated artificial intelligence approaches. Early personalisation systems relied on predetermined rules set by domain experts, resulting in limited flexibility and an inability to address individual user preferences [3,4].

Before the widespread adoption of artificial intelligence, e-commerce personalisation primarily relied on rule-based systems and basic segmentation techniques such as demographic targeting and purchase history analysis. The introduction of collaborative filtering in the 1990s marked a significant paradigm shift, allowing personalisation systems to utilise collective user behaviour patterns to generate recommendations [5].

B. Current State of AI Personalisation

By 2024, predictive analytics had become essential for businesses seeking a competitive advantage in understanding customer behaviour. AI-driven systems enable marketers to anticipate customer actions. Contemporary personalisation strategies include content recommendations, dynamic pricing, interface customisation, and predictive customer service [6].

Empirical research on Chinese e-commerce platforms, based on 1,097 valid questionnaires, demonstrates that AI personalised recommendations significantly influence click intentions and consumer behaviour. This large-scale study confirms that personalised recommendation technology provides more accurate and diverse options to consumers, thereby increasing click-through rates and sales performance [7].

Methodology

This research employs a comprehensive literature review methodology, synthesising empirical findings from peer-reviewed journal articles, conference proceedings, and industry reports published between 2020 and 2025. The evaluation framework examines algorithms across multiple dimensions:

• Technical Architecture: Underlying computational mechanisms and data processing approaches

• Performance Metrics: Accuracy, precision, recall, F1-score, scalability, and computational efficiency

• User Experience Impact: Engagement rates, satisfaction scores, and conversion metrics

• Ethical Considerations: Privacy protection, bias mitigation, and transparency

• Practical Limitations: Implementation challenges, resource requirements, and contextual constraints

Evaluation of AI Algorithms

A. Collaborative Filtering

I. Technical Foundations: Collaborative filtering leverages similarities between users and items to generate recommendations, enabling serendipitous suggestions by identifying items preferred by users with analogous preferences. This method is grounded in the principle that users exhibiting similar historical behaviour patterns are likely to share future preferences (Google for Developers, 2024).

Collaborative filtering is typically categorised into two primary approaches:

• Memory-Based Approaches: These directly compute similarities between users or items using historical interaction data. User-based collaborative filtering identifies users with similar preferences and recommends items they have favoured. In contrast, item-based approaches suggest items similar to those a user has previously interacted with [8].

• Model-Based Approaches: Model-based collaborative filtering employs machine learning techniques, such as matrix factorisation or neural networks, to identify patterns in user behaviour data and predict future preferences. These methods develop latent factor models that represent underlying structures in user-item interactions [9].
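The memory-based approach described above can be sketched in a few lines. This is a minimal illustration, not an implementation from the paper: the rating matrix, users, and items are invented toy data, and unrated entries are encoded as zero.

```python
import numpy as np

# Toy user-item rating matrix (rows: users, cols: items); 0 = unrated.
# All values here are illustrative, not taken from the studies cited.
R = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 0, 5, 4],
], dtype=float)

def cosine_sim(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return a @ b / denom if denom else 0.0

def predict_user_based(R, user, item):
    """Predict a rating as the similarity-weighted average of ratings
    given to the item by other users (user-based, memory-based CF)."""
    num, den = 0.0, 0.0
    for other in range(R.shape[0]):
        if other == user or R[other, item] == 0:
            continue  # skip the target user and non-raters
        s = cosine_sim(R[user], R[other])
        num += s * R[other, item]
        den += abs(s)
    return num / den if den else 0.0

pred = predict_user_based(R, user=1, item=1)
```

Item-based variants follow the same pattern with the similarity computed between item columns rather than user rows.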

II. Strengths: Collaborative filtering relies exclusively on user interaction data, eliminating the need for feature extraction. This approach offers high novelty by generating recommendations based on behaviours observed across the user base, leading to serendipitous discoveries. The system leverages community patterns to deliver highly personalised suggestions [10].

Collaborative filtering is particularly effective for unstructured content such as music or videos, as it performs well with abstract data where extracting content features is challenging. This characteristic makes collaborative filtering especially valuable for entertainment platforms, where subjective user preferences are predominant.

SVD-based techniques have demonstrated strong performance in empirical studies. Specific, straightforward algorithms have achieved more than a 20% improvement in accuracy over traditional user-based approaches under sparse data conditions, while maintaining over 90% coverage [11].
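As a rough sketch of the SVD-based, model-based family, the following factorises a small rating matrix into latent factors and reconstructs scores for unrated entries. The mean-imputation of missing ratings and the choice of two latent factors are simplifying assumptions for illustration only; production systems use regularised factorisation trained on observed entries.

```python
import numpy as np

# Illustrative low-rank factorisation of a toy rating matrix via SVD;
# missing entries (0) are mean-imputed purely for this sketch.
R = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 0, 5, 4],
], dtype=float)
mask = R > 0
filled = np.where(mask, R, R[mask].mean())

U, s, Vt = np.linalg.svd(filled, full_matrices=False)
k = 2  # retain the two strongest latent factors
R_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Predicted score for user 0 on their unrated item (index 2)
score = R_hat[0, 2]
```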

III. Weaknesses: The cold start problem arises from data sparsity, as collaborative filtering methods depend on users’ historical preferences. New users must provide sufficient ratings before the system can accurately model their preferences. Likewise, newly introduced items require a substantial number of user ratings before they can be effectively recommended.

Data sparsity in large datasets diminishes prediction accuracy because user-item interactions become increasingly infrequent. As platforms expand to millions of users and items, computational demands rise significantly, resulting in scalability challenges.

Specific algorithms can unintentionally reinforce a rich-get-richer effect for popular products, thereby hindering optimal consumer-product matches by overlooking items with limited historical data. This popularity bias reduces diversity and disadvantages niche content.

Model-based algorithms pose several challenges because many models are highly complex and require the estimation of numerous parameters, thereby increasing sensitivity to data changes. Additionally, constructing and updating these models demands significant computational resources.

IV. Empirical Performance: Comprehensive experiments using popular metrics revealed weaknesses of many collaborative filtering algorithms in extracting information from user profiles, especially under sparsity conditions. The research, which compared different techniques using the MovieLens dataset, demonstrated significant performance variations based on data density.

B. Content-Based Filtering

I. Technical Foundations: Content-based filtering recommends items by analysing item characteristics and matching them to user preference profiles. This method relies on detailed information about both item characteristics and user preferences, utilising content features such as keywords, tags, and attributes [12].

The system constructs user profiles by extracting features from items users have previously interacted with and subsequently recommends new items that share similar characteristics. This supervised machine learning approach employs classifiers to differentiate, for each user, between items likely to be of interest and those that are not [13].
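The profile-construction step above can be sketched as follows. The feature vectors are hypothetical binary attribute tags (item names and features are invented for illustration); real systems would extract richer features such as TF-IDF vectors over descriptions.

```python
import numpy as np

# Hypothetical binary feature vectors for items (e.g. genre tags);
# item names and feature assignments are invented for illustration.
item_features = {
    "item_a": np.array([1.0, 0.0, 1.0, 0.0]),
    "item_b": np.array([1.0, 0.0, 1.0, 1.0]),
    "item_c": np.array([0.0, 1.0, 0.0, 1.0]),
}
liked = ["item_a"]  # items the user previously interacted with

# Build the user profile as the mean of liked-item feature vectors,
# then rank unseen items by cosine similarity to that profile.
profile = np.mean([item_features[i] for i in liked], axis=0)

def cosine(a, b):
    d = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / d) if d else 0.0

candidates = {i: cosine(profile, v)
              for i, v in item_features.items() if i not in liked}
best = max(candidates, key=candidates.get)
```

Because the ranking depends only on item features, a brand-new item can be scored immediately, which is the cold-start advantage discussed below.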

II. Strengths: Content-based filtering addresses the cold-start problem more effectively than collaborative filtering because it relies on item characteristics instead of historical user interactions. New items can be recommended as soon as their features are extracted, without requiring prior user rating history.

This approach provides greater transparency in recommendations, as the system can explain item suggestions based on feature matches. Such interpretability fosters user trust and allows for the refinement of recommendations through explicit feedback [14].

Content-based systems also mitigate the popularity bias inherent in collaborative approaches because recommendations are generated based on content similarity rather than collective behaviour patterns. This facilitates the identification of niche items that align with user preferences [15].

III. Weaknesses: The model generates recommendations solely based on users’ existing interests, which restricts its capacity to introduce novel content (Google for Developers, 2024). Consequently, this limitation fosters filter bubbles in which users are predominantly exposed to content similar to their previous consumption.

Content-based filtering typically recommends items closely resembling those previously viewed by the user, thereby limiting novelty. Additionally, it encounters challenges in processing abstract data such as humour, sarcasm, or nuanced artistic expressions [16].

Items must be machine-analyzable, a requirement that poses significant challenges for retrieving multimedia information because of discrepancies between machine and user perceptions of content. Although manual attribute assignment by humans can enhance accuracy, it is not feasible for large-scale applications. This approach requires advanced feature extraction techniques, and its performance depends heavily on the quality and completeness of item metadata. In emerging domains or for novel content types, creating effective feature representations requires substantial domain expertise.

IV. Empirical Performance: Comparative studies using different metrics, including efficiency, cost, and data accessibility, found that content-based filtering performs better in settings with rich item metadata and clear user preferences, particularly in new-item introduction scenarios.

C. Deep Learning Approaches

I. Technical Foundations: Recommender systems employ a range of deep learning algorithms, encompassing traditional models such as restricted Boltzmann machines, autoencoders, and generative adversarial networks, as well as contemporary architectures including deep attention networks, large language models, and graph neural networks [17].

The introduction of transformers by Vaswani and colleagues fundamentally advanced the understanding of sequence processing, marking a significant breakthrough in natural language processing and enabling new applications in personalisation systems (MDPI 2024).

Deep learning architectures for personalisation offer several key capabilities:

• Feature Learning: Deep neural architectures are capable of extracting latent user-item features, modelling nonlinear user- item interactions, and scaling effectively to support product recommendations.

• Sequential Modelling: Recurrent neural networks are highly effective for addressing sequential and temporal challenges in recommendation systems, as their architecture leverages internal memory to process and predict subsequent inputs in a sequence.

• Multi-Modal Integration: Contemporary deep learning systems process diverse data types such as text, images, audio, and behavioural signals concurrently, facilitating more comprehensive user profiling and enhanced recommendation generation [18].
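The feature-learning idea can be illustrated with a minimal latent-factor model trained by gradient descent: user and item embeddings are learned jointly so that their dot product approximates observed ratings. This is a toy sketch (the interaction data, dimensions, and learning rate are arbitrary assumptions), standing in for the far larger neural architectures named above.

```python
import numpy as np

# Minimal learned-embedding recommender: user/item factors trained by
# stochastic gradient descent on squared rating error. All data and
# hyperparameters below are illustrative toy choices.
rng = np.random.default_rng(0)
n_users, n_items, dim = 4, 5, 3
U = rng.normal(scale=0.1, size=(n_users, dim))  # user embeddings
V = rng.normal(scale=0.1, size=(n_items, dim))  # item embeddings

# Observed (user, item, rating) interactions -- invented toy data.
data = [(0, 1, 5.0), (0, 3, 1.0), (1, 1, 4.0), (2, 4, 5.0)]
lr = 0.02
for _ in range(500):
    for u, i, r in data:
        err = r - U[u] @ V[i]      # prediction error for this pair
        U[u] += lr * err * V[i]    # gradient step on user factors
        V[i] += lr * err * U[u]    # gradient step on item factors

pred = U[0] @ V[1]  # should approach the observed rating of 5.0
```

Deep models replace the plain dot product with nonlinear networks over these embeddings, which is what lets them capture the nonlinear interactions discussed below.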

II. Strengths: Deep learning systems utilising neural networks continuously analyse user interactions in real time. This capability enables dynamic adjustment of recommendations based on evolving user preferences, thereby addressing the static limitations of traditional systems [19].

Deep learning addresses the cold start problem by leveraging pretrained models and unsupervised learning on large, diverse datasets. These approaches enable recognition of general patterns and transfer of knowledge from other sources, even when specific data for new users or products is unavailable.

Empirical research involving 300 university students demonstrated that adaptive learning platforms powered by deep learning significantly improved student engagement and academic achievement. These platforms also fostered self-directed learning through personalised content adaptation.

Deep learning models capture complex, nonlinear relationships in user behaviour that traditional algorithms often overlook. These models identify correlations between user preferences and external factors such as time of day, weather, or visual cues from images, enabling tailored recommendations that reflect each user’s evolving tastes.

III. Weaknesses: Deep learning models often exhibit high complexity due to the large number of parameters that must be estimated, resulting in increased sensitivity to variations in input data. Furthermore, constructing and updating these models as new data become available demands significant computational resources.

Deep learning approaches are frequently described as "black boxes," which complicates the interpretation and explanation of recommendation decisions. This opacity in many AI systems hinders understanding of decision-making processes and limits accountability for outcomes, thereby reducing trust in these technologies [20].

Despite the use of advanced optimisation techniques, researchers must meticulously adjust hyperparameters to obtain only incremental improvements in accuracy. Enhanced performance in deep neural networks generally necessitates extensive computational resources and specialised hardware accelerators, which are associated with high energy consumption.

The lengthy training times associated with deep learning models extend validation cycles, while the need for large datasets may not be feasible in specific application domains. Additionally, these models are vulnerable to adversarial attacks and may reinforce biases inherent in the training data [21].

IV. Empirical Performance: Quantitative analysis employing performance metrics such as accuracy, precision, recall, and F1-score indicates that personalised learning models based on convolutional and recurrent neural networks achieve high accuracy in predicting student learning outcomes. Experiments involving more than 1,000 trained deep learning models demonstrate that personalisation performance is correlated with fairness. Specifically, higher overall task and personalisation performance are associated with lower standard deviation among individuals.

D. Hybrid Systems

I. Technical Foundations: Hybrid recommender systems integrate content-based methods with collaborative filtering to mitigate individual limitations and capitalise on complementary strengths. Integration is achieved through strategies such as weighted combinations, switching mechanisms, and cascade approaches [22].

Hybrid model integration involves combining diverse models and deep learning architectures to enhance performance and scalability, thereby facilitating the development of more effective recommender system approaches.
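The weighted-combination strategy mentioned above can be sketched in a few lines. The component scores and the blending weight alpha are illustrative assumptions; in practice the weight would be tuned empirically or switched per context.

```python
# Weighted hybrid: blend normalised scores from two component
# recommenders. Scores and the weight alpha are illustrative.
def hybrid_scores(collab, content, alpha=0.7):
    """Combine per-item scores from collaborative and content-based
    components (both assumed scaled to [0, 1])."""
    return {item: alpha * collab.get(item, 0.0)
                  + (1 - alpha) * content.get(item, 0.0)
            for item in set(collab) | set(content)}

collab = {"a": 0.9, "b": 0.2}    # collaborative component scores
content = {"b": 0.8, "c": 0.6}   # content-based component scores
ranked = sorted(hybrid_scores(collab, content).items(),
                key=lambda kv: kv[1], reverse=True)
```

Note that item "c", which the collaborative component cannot score at all (a cold-start case), still receives a rank via its content score, which is the complementarity the section describes.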

II. Strengths: The integration of collaborative and content-based approaches through hybrid systems and deep learning architectures constitutes the current state of the art. This integration addresses the limitations of individual methods while leveraging their complementary strengths [23].

Hybrid systems address the cold-start problem by utilising content features when collaborative data is limited. When sufficient interaction data is available, these systems leverage collaborative patterns to provide serendipitous recommendations. This adaptability supports robust performance across diverse data conditions.

Hybrid models address complex problems and improve outcomes through techniques such as model stacking. In this approach, multiple deep learning models are trained independently and their outputs are combined to form ensembles.

III. Weaknesses: Hybrid systems increase the complexity of system design, implementation, and maintenance. Identifying optimal weighting schemes or switching criteria between algorithms often demands extensive experimentation and significant domain expertise (Burke 2002).

The computational overhead of executing multiple algorithms concurrently can be considerable, potentially limiting real-time performance in large-scale applications. Additional integration challenges occur when combining algorithms that have differing data requirements, output formats, and update frequencies [24].

The assumptions underlying various models may not align with actual data, and in practice, many theoretical models are not applicable to real-world datasets, which can result in inaccurate recommendations.

IV. Empirical Performance: Research evaluating hybrid models from a business perspective, assessing efficiency, cost, and revenue generation, demonstrated that hybrid approaches generally outperform individual algorithms in practical applications.

Ethical Considerations and Limitations

A. Privacy and Data Protection

Extensive data collection for AI system development presents significant ethical challenges, particularly regarding privacy and security. In the absence of robust safeguards, personal information is vulnerable to misuse or loss, which can result in privacy breaches and unauthorised access [25]. The phenomenon of "privacy fatigue" arises from the widespread use of digital platforms and growing consumer dependence on them, which pressures individuals to disclose personal information at the accelerated pace of digital interactions (MDPI 2024). AI-driven personalisation algorithms employed by online platforms for targeted advertising raise concerns regarding the collection and use of sensitive user data without sufficient consent or transparency. The EU AI Act establishes requirements for high-risk AI systems, such as transparency, bias detection, and human oversight. These regulations encourage businesses to adopt proactive measures, including implementing AI ethics policies and Privacy-Enhancing Technologies [26].

B. Algorithmic Bias and Discrimination

Algorithmic bias that perpetuates discrimination remains a significant concern because AI systems are trained on historical data. When training data reflects societal prejudices, algorithms can maintain or amplify these biases, resulting in discriminatory outcomes. AI-powered hiring systems trained on historical data can inadvertently learn and propagate biases against specific demographic groups. Additionally, facial recognition algorithms trained primarily on data from light-skinned individuals demonstrate higher error rates when applied to individuals with darker skin tones. Biases in AI systems can result in discrimination against individuals and groups. Discriminatory analytics may contribute to self-fulfilling prophecies and the stigmatisation of targeted groups, thereby undermining their autonomy and participation in society [27].

C. Transparency and Explainability

The opacity of complex AI algorithms impedes understanding of their decision-making processes and the identification of potential biases and errors, both of which are critical for ensuring accountability. Transparency is fundamental for fostering consumer trust in AI-driven personalisation, as it enables individuals to understand how their data is utilised and how personalised experiences are generated [28]. Many adaptive algorithms in AI evolve continuously, often to a degree that even their developers cannot fully explain the outcomes produced, thereby undermining accountability.

D. Filter Bubbles and Information Diversity

Personalisation limits the diversity of information available to users by excluding content considered irrelevant or contradictory to their beliefs or preferences. This reduction in information diversity poses a challenge, as such diversity is regarded as an essential condition for autonomy. Filter bubbles hinder exposure to diverse viewpoints. AI personalisation algorithms utilise extensive user data to tailor content, which can influence user perception, decision-making, and behaviour in opaque and potentially unethical ways. The emergence of "filter bubbles" or "echo chambers" is evident when users receive information that exclusively reinforces their existing views. This phenomenon underscores the importance of balancing content relevance with information diversity [29].

E. User Autonomy and Manipulation

Autonomy in decision-making is compromised when preferred outcomes reflect third-party interests rather than those of the individual, as AI systems can influence behaviour by filtering information. Research indicates that information advantage alone can justify algorithmic dependence, even in the absence of search costs or time-inconsistent preferences, resulting in negative effects on users’ independent decision-making and learning [30]. The potential for manipulation constitutes a significant ethical concern in AI-driven personalisation, as such systems can influence perception, decision-making, and behavior.

F. Digital Literacy and Knowledge Gaps

A study involving 1,213 Czech respondents and employing fuzzy logic revealed significant demographic disparities in digital media literacy. The findings underscore the urgent need for targeted educational programs that address personalisation processes. Understanding of personalised content varies significantly across social strata, and many users remain unaware of the influence of AI algorithms on their online experiences. This knowledge gap undermines both informed consent and user agency [31].

Research Gaps and Limitations

A. Evaluation Standardization

A universally accepted method for evaluating collaborative filtering algorithms has not yet been established, and comparative analyses of different strategies remain limited despite over a decade of research producing numerous algorithms. Existing evaluation metrics primarily emphasise accuracy, often overlooking critical dimensions such as diversity, serendipity, coverage, and user satisfaction. The absence of standardised benchmarks impedes meaningful comparisons across various algorithmic approaches [32].

B. Context-Awareness Limitations

Most personalisation algorithms demonstrate a limited understanding of the contextual factors that influence user preferences. Effective personalisation should incorporate contextual variables such as device type and user location, enabling tailored product recommendations and marketing messages across websites, applications, and social media platforms. Current systems inadequately address temporal dynamics, social context, emotional states, and task-oriented goals. Advancing algorithms that adapt to evolving contexts necessitates the use of more sophisticated modelling approaches.

C. Long-Term Impact Assessment

While privacy, fairness, and polarisation have been extensively examined, algorithmic dependence and its subsequent effects on users’ learning remain comparatively underexplored. Few studies investigate the long-term effects of personalisation on user behaviour, preference formation, and information consumption patterns. Comprehensive longitudinal research is necessary to understand how sustained exposure to personalised content influences cognitive processes and decision-making [33].

D. Cross-Domain Personalization

Current research predominantly addresses single-domain applications, such as e-commerce, video streaming, and news, while cross-domain personalisation—where user preferences and behaviours extend across multiple contexts—remains underexplored. The development of unified frameworks capable of transferring knowledge between domains constitutes a significant research opportunity [34].

E. Scalability-Quality Trade-offs

With the growth in the number of users and items, traditional collaborative filtering algorithms encounter substantial scalability challenges. Addressing the needs of tens of millions of customers requires advanced optimisation strategies. Balancing real-time responsiveness with high-quality recommendations at scale remains an unresolved issue. Continued research into distributed algorithms, incremental learning, and efficient data structures for large-scale personalisation is necessary [35].

F. Adversarial Robustness

This study advances the policy debate by elucidating the foundations of algorithmic dependence, providing insights that are increasingly pertinent in light of rising concerns regarding adversarial AI. Personalisation systems are susceptible to multiple forms of attack, including profile injection, shilling, and data poisoning. Although research on adversarial machine learning for recommender systems is emerging, it remains in an early stage and requires further advancement [36].

G. Multimodal Integration Challenges

Recent developments indicate a transition toward personalised and privacy-preserving applications. In particular, personalised healthcare is advancing through the use of deep learning-based diagnostic and predictive models that are tailored to individual data (MDPI 2024). The integration of diverse data modalities, such as text, images, video, audio, and sensor data, while preserving computational efficiency and interpretability, presents substantial technical challenges.

Recommendations for Future Research

A. Ethical AI Frameworks

A foundational ethical framework for evaluating AI-driven personalisation applications is necessary. Such a framework should ensure that technological advancement is balanced with ethical considerations, including the protection of user rights and the promotion of social well-being (Rishabh Rajesh 2024). Future research must establish comprehensive ethical frameworks that incorporate fairness, transparency, privacy, and user autonomy as primary design constraints rather than secondary considerations. Effective development of these frameworks requires multidisciplinary collaboration among computer scientists, ethicists, policymakers, and social scientists [37].

B. Explainable Personalisation

Principles of fairness and transparency may be embedded into AI algorithms by designing models that prioritise fairness metrics and offer decision explanations through methods such as fairness-aware learning and model explainability. Future research should focus on developing interpretable algorithms capable of providing meaningful explanations for recommendations without compromising performance. Additionally, the investigation of interactive explanation interfaces that allow users to understand and refine their preference models is warranted [38].

C. Privacy-Preserving Personalisation

Privacy-preserving technologies, including federated learning and differential privacy, facilitate data analysis and model training while maintaining individual privacy rights and protecting sensitive information. Edge AI and federated learning play a significant role in addressing privacy concerns, especially in healthcare and finance, by supporting model training directly on devices and safeguarding sensitive data [39]. Further research on homomorphic encryption, secure multi-party computation, and privacy-preserving record linkage is necessary to support personalisation without centralising sensitive user data [40].
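Of the techniques listed, differential privacy is perhaps the simplest to sketch: a counting query is released with calibrated Laplace noise so that any single user's presence is statistically masked. The epsilon value and the query below are illustrative choices, not recommendations.

```python
import numpy as np

# Laplace mechanism sketch: release an aggregate count with
# epsilon-differential privacy. Parameters are illustrative.
def private_count(true_count, epsilon=1.0, sensitivity=1.0, rng=None):
    """Add Laplace(sensitivity/epsilon) noise to a counting query.
    Sensitivity 1 reflects that one user changes the count by at most 1."""
    rng = rng or np.random.default_rng()
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# e.g. "how many users clicked item X" released with epsilon = 0.5
noisy = private_count(1000, epsilon=0.5)
```

Smaller epsilon gives stronger privacy at the cost of noisier aggregates, which is the utility trade-off these systems must manage.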

D. Bias Mitigation Strategies

Effective strategies for addressing bias and discrimination in artificial intelligence involve assembling diverse teams capable of recognising and preventing bias, as well as conducting regular audits of algorithms to identify and correct discriminatory patterns. Future research should establish systematic methods for detecting, measuring, and mitigating various forms of bias throughout the machine learning pipeline. Further advancement is needed in fair representation learning, causal modelling, and adversarial debiasing techniques [41].

E. User-Centric Evaluation Metrics

In addition to accuracy metrics, future research should establish comprehensive evaluation frameworks that include user-centric measures such as perceived relevance, discovery value, diversity, control, transparency, and trust. Employing mixed-methods approaches that integrate quantitative metrics with qualitative user studies can yield more nuanced insights [42].
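Two of the measures such a framework might include, precision@k and intra-list diversity, can be sketched directly. The recommendation lists and feature vectors below are invented toy inputs.

```python
import numpy as np

# Sketch of two evaluation measures beyond plain accuracy:
# precision@k and intra-list diversity. Inputs are toy examples.
def precision_at_k(recommended, relevant, k):
    """Fraction of the top-k recommendations that are relevant."""
    top = recommended[:k]
    return len(set(top) & set(relevant)) / k

def intra_list_diversity(vectors):
    """Mean pairwise cosine distance over recommended-item vectors;
    higher values indicate a more diverse recommendation list."""
    dists = []
    for i in range(len(vectors)):
        for j in range(i + 1, len(vectors)):
            a, b = vectors[i], vectors[j]
            sim = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
            dists.append(1.0 - sim)
    return float(np.mean(dists))

p = precision_at_k(["a", "b", "c", "d"], relevant={"b", "d"}, k=4)
div = intra_list_diversity([np.array([1.0, 0.0]),
                            np.array([0.0, 1.0])])
```

Measures such as perceived relevance, control, and trust, by contrast, require the qualitative user studies the section calls for and cannot be computed from logs alone.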

F. Contextual and Temporal Modelling

Dynamic content significantly contributes to personalising user experiences, as technological advancements enable content to adapt to audience interactions, resulting in greater alignment with individual preferences. Future research should focus on developing advanced models that account for temporal dynamics, contextual influences, and the evolution of user preferences. The integration of reinforcement learning, contextual bandits, and sequential decision-making frameworks may further improve adaptive personalisation [43].

Conclusion

This comprehensive evaluation demonstrates that, although AI algorithms have significantly improved personalised user experiences, considerable challenges remain across technical, ethical, and practical domains. Collaborative filtering effectively leverages collective intelligence, yet it faces cold-start issues and scalability limitations. Content based approaches enhance transparency and manage new items efficiently, but they contribute to filter bubbles and reduce serendipity. Deep learning methods exhibit advanced pattern recognition and adaptability, but they are hindered by opacity, substantial computational demands, and the risk of bias amplification [44-50]. Hybrid systems achieve balanced performance by integrating complementary strengths, although they introduce greater complexity.

Addressing these ethical challenges necessitates multidisciplinary research to establish guidelines and frameworks for responsible AI-driven personalisation. The identified research gaps, including evaluation standardisation, context-awareness, long-term impact assessment, and adversarial robustness, require ongoing and systematic investigation.

Future research should prioritise the development of ethical, transparent, and user-centric personalisation systems that improve user experiences while upholding autonomy, privacy, and fairness. Although AI-driven personalisation can create more relevant and engaging consumer experiences, it also presents significant ethical concerns that must be addressed to ensure responsible and equitable implementation.

The integration of AI capabilities with ethical considerations, regulatory frameworks, and user empowerment will shape the future direction of personalisation technologies. Achieving success in this domain depends on collaborative efforts among researchers, practitioners, policymakers, and users to ensure that AI-driven personalisation aligns with human values and promotes societal well-being.

References

  1. Raji, M.A., A. N. & Kumar, S. (2024). 'Consumer trust and brand equity in AI-driven personalization'. Journal of Marketing Research, 61, 156–174.
  2. Lopes, M., S.-A. & Santos, R. (2024). 'AI personalization and sustainable consumer relationships in e-commerce'. Journal of Digital Marketing, 12, 234–251.
  3. Dasi, U., Singla, N., R. B., et al. (2024). 'Ethical implications of AI-driven personalization in digital media'. Journal of Informatics Education and Research, 4.
  4. Abdullah, N.H., C. J. & Kumar, V. (2024). 'Ethical implications of AI-driven personalization in digital marketing'. International Journal of Marketing Studies, 15, 45–62.
  5. Linden, G., Smith, B. & York, J. (2003). 'Amazon.com recommendations: Item-to-item collaborative filtering'. IEEE Internet Computing, 7, 76–80.
  6. Dotdigital (2024). 'Top personalization trends in 2024'. Dotdigital.
  7. Li, Y., Z.-X. & Wang, J. (2025). 'The impact of AI-personalized recommendations on clicking intentions: Evidence from Chinese e-commerce'. Journal of Information Science, 20, 1–18.
  8. Breese, J.S., Heckerman, D. & Kadie, C. (2013). 'Empirical analysis of predictive algorithms for collaborative filtering'. arXiv preprint.
  9. Koren, Y., Bell, R. & Volinsky, C. (2021). 'Matrix factorization techniques for recommender systems'. Computer, pp. 30–37.
  10. Geeks for Geeks (2024b). 'Content-based vs collaborative filtering: Difference'.
  11. Cacheda, F., Carneiro, V., Fernández, D. & Formoso, V. (2011). 'Comparison of collaborative filtering algorithms: Limitations of current techniques and proposals for scalable, high-performance recommender systems'. ACM Transactions on the Web, 5, 1–33.
  12. Phalle, T.S. & P. S. B. (2024). 'Content-based filtering and collaborative filtering: A comparative study'. ResearchGate.
  13. Pazzani, M. & Billsus, D. (2007). 'Content-based recommendation systems'. The Adaptive Web. Springer-Verlag, pp. 325–341.
  14. Liang, T.P., L.-H. & Ku, Y. (2006). 'Personalized content recommendation and user satisfaction: Theoretical synthesis and empirical findings'. Journal of Management Information Systems, 23, 45–70.
  15. Melville, P., Mooney, R. & Nagarajan, R. (2001). 'Content-boosted collaborative filtering for improved recommendations'. Proceedings of the 18th National Conference on Artificial Intelligence, pp. 187–192.
  16. Geeks for Geeks (2024a). 'Content-based filtering advantages and disadvantages'.
  17. Gheewala, S., Xu, S., et al. (2025). 'In-depth survey: Deep learning in recommender systems: Exploring prediction and ranking models, datasets, feature analysis, and emerging trends'. Neural Computing and Applications.
  18. Schneider, J. (2019). 'Personalization of deep learning'. arXiv preprint.
  19. Murrell, T. (2024). 'The power of deep learning for hyper-personalized recommendations'. Shaped Blog.
  20. CloudThat (2024). 'The ethics of AI: Addressing bias, privacy, and accountability in machine learning'. CloudThat.
  21. Bender, E.M., Gebru, T., McMillan-Major, A. & Shmitchell, S. (2021). 'On the dangers of stochastic parrots: Can language models be too big?'. Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, ACM Press, pp. 610–623.
  22. Basilico, J. & Hofmann, T. (2004). 'Unifying collaborative and content-based filtering'. Proceedings of the 21st International Conference on Machine Learning, ACM Press, pp. 65–72.
  23. Rajesh, R., Dasi, U., Singla, N., et al. (2024). 'Ethical implications of AI-driven personalization in digital media'. ResearchGate.
  24. Adomavicius, G., Sankaranarayanan, R., Sen, S. & Tuzhilin, A. (2005). 'Incorporating contextual information in recommender systems using a multidimensional approach'. IEEE Transactions on Knowledge and Data Engineering, 17, 734–749.
  25. Karami, A., S.-M. & Ghazanfar, M.A. (2024). 'Exploring the ethical implications of AI-powered personalization in digital marketing'. Data Intelligence.
  26. CSA (2025). 'AI and privacy: Shifting from 2024 to 2025'. Cloud Security Alliance.
  27. Council of Europe (2024). 'Common ethical challenges in AI: Human rights and biomedicine'. Council of Europe.
  28. Preprints (2024). 'AI-driven personalization in digital marketing: Effectiveness and ethical considerations'. Preprints.
  29. IJSRA (2025). 'AI-driven personalization in e-commerce: Transforming consumer behavior and brand performance'. International Journal of Science and Research Archive, 16, 264–273.
  30. Rafieian, O. (2024). 'Personalization, algorithmic dependence, and learning'.
  31. Hoffmann, C.P., L.-C. & Ranzini, G. (2024). 'Inequalities in privacy cynicism: An intersectional analysis of agency constraints'. Big Data & Society, 11, 1–13.
  32. Herlocker, J.L., Konstan, J.A., Terveen, L.G. & Riedl, J. (2004). 'Evaluating collaborative filtering recommender systems'. ACM Transactions on Information Systems, 22, 5–53.
  33. Nguyen, T.T., Hui, P.-M., Harper, F.M., Terveen, L. & Konstan, J. (2014). 'Exploring the filter bubble: The effect of using recommender systems on content diversity'. Proceedings of the 23rd International Conference on World Wide Web, ACM Press, pp. 677–686.
  34. Cantador, I., Fernández-Tobías, I., Berkovsky, S. & Cremonesi, P. (2015). 'Cross-domain recommender systems'. Recommender Systems Handbook, 2nd edn. Boston: Springer, pp. 919–959.
  35. Linden, G., Smith, B. & York, J. (2003). 'Amazon.com recommendations: Item-to-item collaborative filtering'. IEEE Internet Computing, 7, 76–80.
  36. Mobasher, B., Burke, R., Bhaumik, R. & Williams, C. (2007). 'Toward trustworthy recommender systems: An analysis of attack models and algorithm robustness'. ACM Transactions on Internet Technology, 7.
  37. Mittelstadt, B.D., Allo, P., Taddeo, M., Wachter, S. & Floridi, L. (2016). 'The ethics of algorithms: Mapping the debate'. Big Data & Society, 3, 1–21.
  38. Miller, T. (2019). 'Explanation in artificial intelligence: Insights from the social sciences'. Artificial Intelligence, pp. 1–38.
  39. MDPI (2024). 'A comprehensive review of deep learning: Architectures, recent advances, and applications'. Information, 15, 755.
  40. Dwork, C. & Roth, A. (2014). 'The algorithmic foundations of differential privacy'. Foundations and Trends in Theoretical Computer Science, 9, 211–407.
  41. Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K. & Galstyan, A. (2021). 'A survey on bias and fairness in machine learning'. ACM Computing Surveys, 54, 1–35.
  42. Knijnenburg, B.P., Willemsen, M.C., Gantner, Z., Soncu, H. & Newell, C. (2012). 'Explaining the user experience of recommender systems'. User Modeling and User-Adapted Interaction, 22, 441–504.
  43. Li, L., Chu, W., Langford, J. & Schapire, R. (2010). 'A contextual-bandit approach to personalized news article recommendation'. Proceedings of the 19th International Conference on World Wide Web, pp. 661–670.
  44. M. (2023). 'Predicting trends in deep learning and neural networks in 2024'.
  45. Ox Journal (2024). 'Evaluating performances of content-based and collaborative filtering in business settings'. Ox Journal.
  46. Scientific Reports (2024). 'Enhancing deep neural network training efficiency and performance through linear prediction'. Scientific Reports, 14.
  47. PMC (2024a). 'Integrating deep learning techniques for personalized learning pathways in higher education'. PMC.
  48. P. (2024b). 'Machine learning algorithms for personalized learning experiences'. International Journal of Research in Engineering, IT and Social Sciences.
  49. Wikipedia (2025). 'Collaborative filtering'. Wikipedia.
  50. Burke, R. (2002). 'Hybrid recommender systems: Survey and experiments'. User Modeling and User-Adapted Interaction, 12, 331–370.