A study by the Israeli firm Team8 was widely picked up by media outlets because of the concerns it raises about corporate secrets and customer information.
As one report says:
“The report said that companies using such tools may leave them susceptible to data leaks and lawsuits. The chatbots can be used by hackers to access sensitive information. Team8’s study said that chatbot queries are not being fed into the large language models to train AI since the models in their current form can’t update themselves in real-time. This, however, may not be true for the future versions of such models, it added.”
Bloomberg News was the first to cover the study and is said to have received it “prior to its release.” As the Bloomberg report says:
Major technology companies including Microsoft Corp. and Alphabet Inc. are racing to add generative AI capabilities to improve chatbots and search engines, training their models on data scraped from the Internet to give users a one-stop-shop to their queries. If these tools are fed confidential or private data, it will be very difficult to erase the information, the report said.
Read the complete Bloomberg report on the Team8 study here.