Why Apple, Amazon, & Others Are Banning Employee Use Of Generative AI Like ChatGPT

While AI shows a lot of promise, it has also caused significant concern in recent months. Part of that concern relates to its ability to fill certain roles currently occupied by humans, and its potential to leave large sections of society out of work as a result. Articles have suggested everyone from writers to musicians to general office workers may be working alongside large language model-based bots in the near future. A second school of thought sees AI as more of a tool than a straight replacement: workers may be able to use the technology to cut repetitive tasks from their day and become more productive as a result.

According to The Wall Street Journal, major companies including Apple, Amazon, JPMorgan Chase, and Verizon are restricting employee use of this emerging technology during the workday. Some of the bans, including Apple's and Amazon's, reportedly aren't aimed at AI in general but instead specifically target OpenAI's ChatGPT, with the companies preferring that their staff use in-house AI tools instead.

The decision comes after other leading voices in tech have spoken out against the emerging technology. Dr. Geoffrey Hinton, one of the pioneers of neural networks, recently expressed regret about his substantial contributions to the technology's development. Other notable names in tech, including Elon Musk and Steve Wozniak, have called for a pause on AI research until lawmakers can put legal safeguards in place. But big companies seem to be restricting employee access for other reasons.

The companies aren't restricting AI out of concern for their employees

While some people worry about the damage AI could do to humanity as a whole, many big tech firms are more concerned about what these external platforms could do with their sensitive data. OpenAI is closely partnered with Microsoft, so it makes sense that Microsoft's closest rivals would be extra cautious with its products. Reportedly, employees were using the model to streamline a variety of tasks, including writing emails and producing reams of code. Apple has notoriously tight security, and would likely prefer that its customer data and confidential product information not be entered into a program in which a close rival is actively invested.

Likewise, Samsung has banned the use of external generative AI by its workforce, doing so after discovering that some employees had shared "sensitive code" with the platform, according to Bloomberg. That report, citing a leaked internal memo, alleges that Samsung was concerned about its data being stored on third-party servers outside of its control.

It is worth noting that OpenAI recently added additional privacy options. Users can now turn off their chat histories and request that their entries not be used to train the language model. However, enabling these options doesn't make your data 100% private. OpenAI claims it is still monitoring all chats "for abuse." It's unclear exactly what this means, but it likely refers to messages that may break the rules, which are the ones that quickly turn orange or red in the interface. Additionally, all of the data is still kept on file for 30 days before being deleted.