Is it safe to ask ChatGPT where to invest? The CNMV warns of the risks

Artificial intelligence is revolutionizing multiple sectors, and the world of investment is no exception. However, blindly trusting tools such as ChatGPT to make financial decisions can be a recipe for disaster. The warning comes from Spain's National Securities Market Commission (CNMV), which has joined a campaign by the European Securities and Markets Authority (ESMA) to alert investors to the dangers of relying on AI in their investment strategies.

According to ESMA, many tools based on artificial intelligence can produce inaccurate information, since their answers are built from data that may be outdated, incomplete or even incorrect. "AI tools that offer investment ideas can mislead investors by generating unrealistic or inappropriate data," says the European agency.

Can AI replace a financial advisor?

The AI boom has led some entities to offer automated investment products. Through questionnaires, algorithms determine the user's risk profile and propose an asset allocation without any human interaction. Although these services usually operate under a certain degree of supervision, the problem arises when retail investors turn to general-purpose tools such as ChatGPT in search of investment recommendations, without any filter or regulation.
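As a rough illustration only: the questions, scores, thresholds and model portfolios in the following sketch are invented for this example and do not reflect any real provider's algorithm. It simply shows the kind of questionnaire-to-allocation mapping these automated services rely on.

# Hypothetical sketch of a questionnaire-based risk profiler.
# All questions, scoring and allocations are illustrative assumptions.

QUESTIONS = {
    "investment_horizon_years": {"<3": 1, "3-10": 2, ">10": 3},
    "reaction_to_20pct_loss": {"sell": 1, "hold": 2, "buy_more": 3},
    "income_stability": {"low": 1, "medium": 2, "high": 3},
}

ALLOCATIONS = {
    "conservative": {"bonds": 0.70, "equities": 0.20, "cash": 0.10},
    "balanced": {"bonds": 0.45, "equities": 0.45, "cash": 0.10},
    "aggressive": {"bonds": 0.20, "equities": 0.75, "cash": 0.05},
}

def risk_profile(answers: dict) -> str:
    # Sum the score of each answer and bucket the total into a profile.
    score = sum(QUESTIONS[question][answer] for question, answer in answers.items())
    if score <= 4:
        return "conservative"
    if score <= 7:
        return "balanced"
    return "aggressive"

if __name__ == "__main__":
    answers = {
        "investment_horizon_years": "3-10",
        "reaction_to_20pct_loss": "hold",
        "income_stability": "high",
    }
    profile = risk_profile(answers)
    print(profile, ALLOCATIONS[profile])

Regulated robo-advisors add layers that this toy version omits, such as suitability checks and human oversight; a general-purpose chatbot offers none of them.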

ESMA stresses that certainties are few in financial markets and that trusting predictions generated by AI can be dangerous. "Be wary of websites and applications that claim to be able to accurately predict the future prices of securities," warns the entity.

Investors left unprotected and without guarantees

One of the main risks pointed out by the CNMV is the lack of protection for investors who follow AI recommendations. If something goes wrong, there is no financial ombudsman or dispute-resolution mechanism to turn to. "Online tools accessible to the general public have no obligation to act in the investor's interest," the regulator stresses.

In addition, given the experimental nature of many artificial intelligence models, not even their own