AI models still produce false information, or hallucinations: fabricated claims that do not correspond to reality. Research from Stanford University indicates that this behaviour stems from how these systems are built: they are given the goal of answering every question put to them, even when they genuinely do not know the answer.
OpenAI researchers point to a basic flaw in how AI models are trained and evaluated. Models are rewarded for producing as many correct answers as possible, so they answer even when they do not actually know. Like a student who guesses on every exam question in the hope of earning marks, they produce wrong answers rather than admit uncertainty.
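A toy calculation makes that incentive concrete. The numbers below are illustrative assumptions, not figures from the research; they show that under accuracy-only grading, even a low-probability guess beats abstaining in expectation, so a model optimized against such a metric learns to always guess:

# Illustrative only: accuracy-only grading, where a correct answer
# earns 1 point and everything else (wrong or blank) earns 0.
p_correct = 0.2  # assumed chance that a blind guess happens to be right

ev_guess = p_correct * 1 + (1 - p_correct) * 0  # expected score for guessing
ev_abstain = 0.0                                # expected score for "I don't know"

print(ev_guess, ev_abstain)  # 0.2 vs 0.0: guessing always scores higher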
The researchers also explain why models attempt even extremely difficult or unanswerable questions. They are trained on vast amounts of material, some of which is inaccurate or contradictory, and they learn to produce an answer from it regardless. When a hard or unanswerable question arrives, the model improvises a response on its own, which usually results in incorrect information.
The investigation also notes that current evaluation systems focus too heavily on raw accuracy. They register only whether a model gave the right answer, not whether it confidently gave a wrong one.
The researchers argue that penalizing models for confident errors, and no longer rewarding wrong answers, can steer them in the right direction. Two concrete steps follow from this, with an illustrative scoring rule sketched after the list:
Penalize AI models more for confident wrong answers than for expressing uncertainty.
Stop rewarding lucky guesses, and give models proper incentives for skipping a question or admitting that they do not know.
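A minimal sketch of what such a scoring rule could look like, in Python. The specific weights (0.3 credit for abstaining, -1.0 penalty for a wrong answer) and the score_answer name are assumptions for illustration, not values from the research:

def score_answer(answer: str, gold: str) -> float:
    # Partial credit for honest uncertainty (assumed weight).
    if answer == "I don't know":
        return 0.3
    # Full credit for a correct answer.
    if answer == gold:
        return 1.0
    # Confident wrong answers are penalized instead of scoring 0 (assumed weight).
    return -1.0

Under this rule, blind guessing with a 20% hit rate has an expected score of 0.2 * 1.0 + 0.8 * (-1.0) = -0.6, which is worse than always abstaining at 0.3, so the incentive to guess disappears.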
This research deserves attention: AI technology has achieved remarkable successes, but the problem of false information and hallucinations has not yet been eliminated. Alongside those successes, these difficulties need to be understood, because deeper scrutiny of AI systems is required before they can deliver consistently accurate and reliable information.