
Journal of Current Trends in Computer Science Research (JCTCSR)

ISSN: 2836-8495 | DOI: 10.33140/JCTCSR

Impact Factor: 0.98*

State-of-the-Art and Future Directions

Mohammad Salman Khan and Ayesha Imran

Abstract

Smart, AI-powered devices, including computers, are becoming increasingly common in our daily lives. However, these systems often make decisions that appear opaque, so it is hard to understand how they work or to anticipate whether the outcome of a decision will be positive or negative. This paper addresses that problem in two ways. First, we explore Explainable Artificial Intelligence (XAI), seeking new and simple methods for making complex AI systems understandable to people. Second, we aim to make AI systems capable of learning on their own by gathering information from their surroundings: learning patterns and structures from unlabeled data so that hidden relationships and features can be identified without explicit guidance. This is Self-Supervised Learning, in which the AI creates its own labels or predictions from the input data, generating a form of supervision. The proposed system would perform facial detection, compare raw input against a dataset, and make accurate, well-suited decisions. Over time, it would expand its memory, much as a human infant does. In a nutshell, our research aims to make AI systems not only powerful but also understandable, trustworthy, and self-supervised.
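The core idea of self-supervised learning mentioned above, that the system derives its own labels from raw, unlabeled input, can be illustrated with a minimal sketch. The pretext task here (next-step prediction over a sequence) and the function name are illustrative assumptions, not the paper's actual method:

```python
# Minimal sketch of self-supervised label generation: the "label"
# for each training example is taken from the raw data itself,
# so no human annotation is needed.

def make_self_supervised_pairs(sequence, context_size=3):
    """Turn an unlabeled sequence into (input, label) pairs.

    The label for each context window is simply the next element
    of the raw data; the supervision comes from the input itself.
    """
    pairs = []
    for i in range(len(sequence) - context_size):
        context = sequence[i:i + context_size]
        target = sequence[i + context_size]  # label drawn from the data
        pairs.append((context, target))
    return pairs

raw = [1, 2, 3, 4, 5, 6]
print(make_self_supervised_pairs(raw))
# [([1, 2, 3], 4), ([2, 3, 4], 5), ([3, 4, 5], 6)]
```

A model trained on such pairs learns structure in the data without explicit guidance; the same principle underlies, for example, masked-token prediction in language models.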
