Microsoft has been criticized for launching a generative AI poll that asked users to predict the cause of a woman’s death. The poll was launched on the company’s Azure AI platform and has since been taken down. Critics have slammed it as “distasteful” and “exploitative.”
Key highlights:
- An AI-generated poll asked users to predict the cause of a woman’s death.
- The poll ran on Microsoft’s Azure AI platform and has since been taken down.
- Critics have slammed it as “distasteful” and “exploitative.”
- Microsoft has apologized, saying the poll was launched without proper review.
The poll, which was first reported by The Guardian, asked users to choose from a list of possible causes of death for a woman named Sarah. The options included “murder,” “suicide,” “accident,” and “natural causes.”
Critics have argued that the poll was insensitive and exploitative, asking users to speculate about the death of a woman they knew nothing about. They have also raised concerns that the poll could be used to train AI systems to predict people’s deaths.
In a statement, Microsoft apologized for the poll and said it had been launched without proper review. The company said the poll had been taken down and that it was investigating how it went live in the first place.
“We understand that this poll was insensitive and inappropriate, and we apologize for any offense it caused,” a Microsoft spokesperson said. “We are investigating how this poll was launched without proper review, and we are taking steps to prevent it from happening again.”
The incident has sparked a debate about the ethics of generative AI, a class of artificial intelligence that can create new content such as text, images, and music. The technology is still relatively new, and there are concerns about how it could be used for harmful purposes.
Some experts have called the incident a wake-up call for the tech industry, arguing that companies need to be more careful about how they develop and deploy generative AI systems and should put safeguards in place before such content reaches the public. A rough sketch of what one such safeguard could look like is shown below.
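As an illustration of the kind of safeguard critics describe, the following is a minimal, hypothetical sketch of a pre-publication gate that blocks auto-generated polls from being attached to stories about sensitive topics. The topic list, threshold logic, and function names are assumptions made for this example; they do not describe any system Microsoft actually uses.

```python
# Hypothetical pre-publication gate for auto-generated polls.
# The keyword list and function names are illustrative assumptions only,
# not a description of Microsoft's actual review process.

SENSITIVE_TOPICS = {"death", "died", "killed", "murder", "suicide", "assault"}


def mentions_sensitive_topic(article_text: str) -> bool:
    """Return True if the article touches a topic where a speculation poll is inappropriate."""
    words = {w.strip(".,!?\"'").lower() for w in article_text.split()}
    return not SENSITIVE_TOPICS.isdisjoint(words)


def needs_human_signoff(poll_question: str) -> bool:
    """Placeholder: a real pipeline would queue the poll for an editor and wait."""
    print(f"Queued for editorial review: {poll_question!r}")
    return False  # default to not publishing until a human approves


def approve_poll(article_text: str, poll_question: str) -> bool:
    """Never attach a speculative poll to a sensitive story; route the rest to human review."""
    if mentions_sensitive_topic(article_text):
        return False
    return needs_human_signoff(poll_question)


if __name__ == "__main__":
    story = "A woman was found dead at her workplace on Thursday."
    poll = "What do you think was the cause of death?"
    print("Publish poll?", approve_poll(story, poll))  # prints: Publish poll? False
```

The point of the sketch is the default: generated polls are withheld automatically when the underlying story is sensitive, and otherwise still require explicit human approval before publication.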
Others counter that the incident says little about whether generative AI is dangerous and is simply a case of a company making a mistake. They argue that the technology has the potential to be a powerful tool for good, and that the tech industry should not be discouraged from developing it.
The debate over generative AI is likely to continue. However, the incident involving the Microsoft poll is a reminder that the tech industry needs to be thoughtful about how it develops and deploys this new technology.