Vitalik Buterin Warns AI Tools Can Be a Privacy Threat to Users



All articles are carefully reviewed and reviewed by leading blockchain experts and industry experts.
  • Vitalik Buterin has warned that many AI tools could pose a serious privacy threat because they rely on remote devices and access to data.
  • He added that the risks extend beyond just major languages ​​to external services, data leaks and jailbreak attacks that can push systems against users’ wishes.

Vitalik Buterin has raised a new alarm about artificial intelligence, this time focusing less on magic and more on mystery.

In a new blog postThe Ethereum co-founder said that many AI tools are built on remote infrastructures that can access the private information of users, creating risks that many people do not see enough when typing in a chatbot, providing services or coordinating an external service. The concern, as he describes it, is not limited to one model or one program. It is tied.

Remote AI infrastructure creates a private environment

Buterin’s point is straightforward. An increasing number of AI products rely on infrastructure that resides outside of the user’s device and outside of the user’s control. This means that information, files, account information and usage patterns can be passed through systems that can store, process or reuse data in ways they did not intend.

He warned that the problem does not stop with large linguistic models. External services connected to these systems can present their own problems, from data overload to unauthorized use of personal information. In other words, accidents are not just examples. That’s the whole chain around it.

This is important because AI is increasingly being marketed as an asset to financesoftware, communication and information on the Internet. The more useful it is, the more confidential it is.

Jailbreaks turn AI into an assistant

Buterin also referred to prison attacks as a real threat. These attacks use external inputs to manipulate the model to behave against its preferences, making the agent a reliable and harmless object.

That warning comes at a time when AI tools are getting closer to execution, not just conversation. As these systems access messages, wallets, documents and transactions automatically, a failure of privacy can also be a failure of operation.

What Buterin is showing here is a change in risk. AI is no longer a question of capability. It becomes a question of confidence limits, who controls the data, where the model goes, and what happens when those limits fail.





Source link

Leave a Reply

Your email address will not be published. Required fields are marked *