Responsible & Safe AI Systems Week 1 || NPTEL ANSWERS || MYSWAYAM
Author: MY SWAYAM
Uploaded: 2025-07-13
Views: 3398
Description:
Responsible & Safe AI Systems Week 1 || NPTEL ANSWERS || MYSWAYAM #nptel #nptel2025 #myswayam
📝 YouTube Description:
🤖 Course: Responsible & Safe AI Systems – Week 1
📅 Session: July–October 2025
👨🏫 Instructors:
Prof. Ponnurangam Kumaraguru – IIIT Hyderabad
Prof. Balaraman Ravindran – IIT Madras
Prof. Arun Rajkumar – IIT Madras
📚 Course Code: NOC25-CS118
🗓️ Exam Date: 26 October 2025 (IST)
🎯 This video features the Week 1 assignment answers for NPTEL's cutting-edge course on Responsible & Safe AI Systems. Explore how AI impacts society and how to build transparent, fair, and safe intelligent systems. Learn about the risks of Generative AI (like ChatGPT, DALL·E, Sora), adversarial attacks, model interpretability, and emerging global AI regulations.
🧠 What You’ll Learn in the Course:
Toxicity, Bias & Adversarial Risks in AI Models
Principles of Responsible AI: Fairness, Interpretability, Privacy
Adversarial Examples, Model Attacks (Trojans, Poisoning)
Explainability Tools (LIME, SHAP, GradCAM)
DPDP Act (India), GDPR, EU AI Act, US Policies
AI Applications in Legal, Health, Education domains
Panel Discussions & Policy Insights
👥 Perfect For:
CS/AI/ML Students & Researchers
Industry Professionals in AI/ML Development
Product Managers, Policy Analysts
Anyone interested in Ethical & Safe AI
📥 Download All Week Assignment Answers:
👉 https://forms.gle/hZ3NsFDdJQz1BHd98
🔗 Assignment Link:
👉 https://onlinecourses.nptel.ac.in/noc...
📲 Join Our NPTEL Learning Network:
💬 WhatsApp Group:
👉 https://chat.whatsapp.com/Cu48TpJSAwA...
📢 Telegram Channel:
👉 https://t.me/+T1eT6JLPsSNkZWNl
📸 Instagram:
👉 / my_swayam
🖥️ Website (Coming Soon)
🔖 SEO Tags for Better Reach
#ResponsibleAI #SafeAISystems #NPTEL2025 #AIRegulations #GenerativeAI #MySwayam #PK #Ravindran #ChatGPT #Llama #SoraAI #AIExplainability #NPTELAnswers