Government employees banned from using ChatGPT and DeepSeek AI Tools


AI Tools: The Finance Ministry of India has issued an order banning the use of AI tools and applications such as ChatGPT and DeepSeek on official government devices. The circular, issued on 29 January 2025, is intended to protect sensitive government data from potential cyber threats.

Why did the government ban AI tools?

The order, signed by Joint Secretary Pradeep Kumar Singh, states that AI-based applications can pose security risks to government systems. In view of this, the Ministry has advised all employees to avoid using such tools on official devices. The instruction has been sent, with the approval of the Finance Secretary, to major government departments including Revenue, Economic Affairs, Expenditure, Public Enterprises, DIPAM and Financial Services.

A global trend towards restricting AI tools

Security concerns about AI tools are growing worldwide, and many governments and private companies are limiting their use to protect sensitive data. AI models such as ChatGPT process user data on external servers, creating a risk of data leaks and unauthorized access. Several global companies have likewise banned the use of AI tools to keep confidential data safe.

Will the restriction also apply to personal devices?

The government's order does not clarify whether employees may use AI tools on their personal devices. However, the move indicates that the government is prioritising data security by taking a cautious stance on AI.

It remains uncertain whether the government will frame a clear policy for AI use in the future. For now, Finance Ministry employees will have to rely on traditional methods for their official work.

The main reasons for banning AI tools

Risk of data leaks

AI tools such as ChatGPT and DeepSeek process user input on external servers. If government employees enter sensitive information into these tools, it may be stored or accessed by third parties, creating the possibility of misuse. Government departments handle confidential financial data, policy drafts and internal communications, so even an inadvertent data leak can pose a serious security risk.

Lack of control over AI models

The government can control traditional software, but these AI tools are cloud-based and owned by private companies. ChatGPT, for example, is owned by OpenAI, and the government has no way of knowing how it processes and stores data. This increases the risk of foreign interference and cyber attacks.

Compliance with data protection laws

India is implementing strict data privacy legislation such as the Digital Personal Data Protection (DPDP) Act, 2023. Using AI tools without clear rules could violate data security policies and leave government systems exposed to cyber threats.

The government has taken this step to strengthen the security of government data. However, it is not yet clear whether a regulated policy for the use of AI tools will be framed in the future. For now, Finance Ministry officials have been advised to rely on traditional working methods to keep sensitive data safe.
