Following criticism over the accuracy of its AI tools, Microsoft is now cautioning users against putting too much trust in its services.
The company has unveiled an updated Service Agreement indicating its AI is best seen as guidance rather than a replacement for professional advice.
The updated agreement, which is set to come into effect at the end of next month, also contains warnings over its Health Bots, amid concerns that users are placing too much trust in the advice being delivered.
Microsoft says AI is no replacement for professionals
Microsoft’s revised terms specifically address the limitations of its Assistive AI: “AI services are not designed, intended, or to be used as substitutes for professional advice.”
The company added Health Bots “are not designed or intended as substitutes for professional medical advice or for use in the diagnosis, cure, mitigation, prevention, or treatment of disease or other conditions.”
The updates reflect the rapid adoption of AI tools in recent months following the introduction of services like ChatGPT, and the subsequent criticism over accuracy, data security, and privacy.
The agreement also reiterates that Copilot AI Experiences, governed by Bing’s Terms of Use, should not be used for extracting data through methods like scraping or harvesting unless expressly permitted by Microsoft.
Moreover, the updates impose stricter rules on the reverse engineering of AI models and enforce other protective measures: “You may not use the AI services to discover any underlying components of the models, algorithms, and systems.”
Microsoft also bans using its AI data to create or train other AI services.
While the agreement also updates terms for other Microsoft services, the revisions to its AI terms signal the company responding to liability concerns and managing user expectations more clearly. They also serve as a gentle reminder that AI technologies are unlikely to replace humans any time soon.