In a controversial move, Slack has been training the models behind its generative AI features on user messages, files, and more, by default and without users' explicit consent.
Instead (per Engadget), those wishing to opt out must do so through their organization's Slack admin, who has to email the company to put a stop to the data use.
The revelation that potentially sensitive information is being used to train Slack's AI highlights the darker side of the technology: generative AI has already come under fire for failing to cite sources correctly and for its potential to produce content that infringes copyright.
Slack criticized for using customer data to train AI models
An extract from the company’s privacy principles page reads:
“To develop non-generative AI/ML models for features such as emoji and channel recommendations, our systems analyze Customer Data (e.g. messages, content, and files) submitted to Slack as well as Other Information (including usage information) as defined in our Privacy Policy and in your customer agreement.”
Another passage reads: “To opt out, please have your org, workspace owners or primary owner contact our Customer Experience team at feedback@slack.com…”
The company does not provide a timeframe for processing such requests.
In response to the uproar among the community, the company published a separate blog post to address the concerns, adding: “We do not build or train these models in such a way that they could learn, memorize, or be able to reproduce any customer data of any kind.”
Slack confirmed that user data is not shared with third-party LLM providers for training purposes.
TechRadar Pro asked Slack’s parent company, Salesforce, to clarify a few details, but the company did not immediately respond.