Employees are using OpenAI’s ChatGPT to write code, draft marketing materials, and handle other supporting tasks that save time and boost productivity. However, some large companies are prohibiting their employees from using the AI chatbot.
Some businesses are actively recruiting candidates with prior ChatGPT experience, whereas others are delaying the chatbot’s incorporation into their employees’ daily work. This hesitation stems from privacy concerns, specifically about sharing sensitive information with third parties whose plans for that data are uncertain and not deemed trustworthy.
Privacy has long been a major concern for OpenAI.
It’s no news that in late June, a lawsuit filed against OpenAI alleged that the company stole “massive amounts” of personal data to train its AI to speak like a human. To this day, OpenAI hasn’t officially responded to the allegations.
Even if you didn’t already know that companies are being cautious about integrating ChatGPT, now you will. To get you up to speed, we’ve compiled a list of the major companies that have banned or restricted their workers from using the AI.
In May, Apple prohibited its employees from using ChatGPT, as well as other artificial intelligence tools such as GitHub’s Copilot, a Microsoft-owned automated coding tool, according to a confidential internal document reviewed by the Wall Street Journal. The iPhone maker is worried that employees’ use of AI could lead to the disclosure of sensitive information, the Journal reported.
According to the Financial Times, Spotify has removed “tens of thousands” of songs created by AI-music generator Boomy as part of a “concerted effort” to protect artists’ royalties. A Spotify representative declined to comment on the removals specifically, but said that artificial streaming is a “long-standing industry-wide issue” that Spotify is “actively working to eliminate across our service.”
Universal Music Group, one of the world’s largest music corporations, filed a complaint with Spotify about “suspicious streaming activity” on Boomy’s songs, according to the Financial Times.
Verizon, a telecommunications giant, said in a February press release that ChatGPT “is not available from our corporate systems” in an attempt to reduce the “risk of loss of control of customer information” and source code. In the same release, Raquel Wilson, Verizon’s communications manager, wrote: “We prioritize our four stakeholders: our communities, our customers, our investors/shareholders and society. We must be cautious when introducing a novel and emerging technology like ChatGPT.”
Wells Fargo has restricted its employees from using ChatGPT to avoid privacy issues with third-party connections. A Wells Fargo spokesperson also told Forbes that the company is “setting usage restrictions” and is looking into “safe and efficient” ways to use the chatbot throughout the company.
The well-known bank has also banned the use of ChatGPT in the workplace to prevent its confidential data from being leaked.
While some of the investment bank’s employees can use the tools for “legal business purposes,” the bank is “actively exploring” how they can be used safely by workers, Kevin King, Director of Corporate Communications, wrote in an email. “As the use of Generative AI continues to grow, we’re limiting the use of ChatGPT and other tools to certain groups of work and employees,” King wrote, “so we can better understand when and how to make the most use of these rapidly changing technologies.”
Amazon’s corporate lawyer warned employees in January not to feed the chatbot “any Amazon confidential information.” The move came after Amazon learned that ChatGPT was producing “instances” of responses that “mirrored” the retail giant’s internal data.
The incident occurred around the same time some workers were using ChatGPT as a tool to write code. The lawyer warned that employees’ inputs could be used as “training data” for a future iteration of the technology, and that “we wouldn’t want your inputs to include or mimic our confidential information.” “I’ve seen instances where ChatGPT’s output matches existing material,” the lawyer added. When asked for comment on Amazon’s restrictions on the use of AI, spokesperson Adam Montgomery said the company had rules in place governing employees’ use of the technology. “We have policies in place to protect employee use of this technology, such as guidance on access to third-party generative AI services and the protection of confidential information,” Montgomery said.