Data privacy is a hot topic in the era of generative AI tools like ChatGPT, which can create realistic and engaging texts based on user inputs. However, these tools also pose a risk of exposing sensitive information to third parties. This is why, a few days after OpenAI released its official ChatGPT app on the iOS store, Apple joined a growing list of companies that restricted their employees' use of ChatGPT and other third-party generative AI tools. So, what drove the tech giant and other companies to this decision? What are the implications? Let’s explore this intriguing development together in this blog article.
For companies that handle proprietary code or deal with sensitive customer data, the use of chatbots like ChatGPT requires careful consideration. When users interact with a sophisticated language model, their input data is sent back to the model's servers, where it may be used for ongoing improvements to the service.
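One way a company can reduce this risk is to scrub obvious secrets from prompts before they ever leave the corporate network. The sketch below is purely illustrative: the function name and the two regex patterns are hypothetical examples, not part of any vendor's data-loss-prevention tooling, and a real policy would cover far more cases.

```python
import re

# Hypothetical pre-submission filter: replaces obvious secrets (email
# addresses, API-key-like tokens) with placeholders before a prompt is
# sent to a third-party chatbot. Patterns are illustrative only.
SENSITIVE_PATTERNS = [
    (re.compile(r"\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b"), "[EMAIL]"),
    (re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"), "[API_KEY]"),
]

def redact(prompt: str) -> str:
    """Return the prompt with sensitive substrings replaced by placeholders."""
    for pattern, placeholder in SENSITIVE_PATTERNS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

print(redact("Contact alice@example.com with key sk-abcdefghijklmnopqrstuv"))
# → Contact [EMAIL] with key [API_KEY]
```

Filters like this don't eliminate the risk, which is why several of the companies below opted for outright restrictions instead.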
As mentioned before, Apple is just the latest company to limit the use of ChatGPT by its employees. Earlier this year, Amazon was the first to take action, specifically asking employees not to share any confidential Amazon information with ChatGPT. This decision came after the company allegedly witnessed the chatbot's responses mimicking internal Amazon data.
Soon after, Walmart warned its employees not to disclose business information to the chatbot. And in early May, Apple’s competitor Samsung banned ChatGPT and most generative AI tools (like Bing and Google Bard) after an employee sent sensitive information to ChatGPT, as reported by Bloomberg.
Like other forward-thinking companies, Apple is taking proactive measures to address concerns about the potential risks of disclosing sensitive information through these innovative applications. The company recognizes the importance of safeguarding confidential data and wants to ensure that its employees navigate this AI landscape responsibly. Apple's decision reflects its commitment to data privacy and maintaining control over sensitive information.
While ChatGPT has garnered significant attention and popularity due to its ability to engage in meaningful conversations, generate text, and perform various tasks, there are legitimate concerns from companies and governments about maintaining data privacy and preventing unauthorized information leaks. This is particularly relevant when it comes to the workplace and employees who might access chatbots to streamline their work processes.
When utilizing generative AI models like ChatGPT, user inputs and question histories are typically transmitted to the developer's servers for continuous platform enhancements. However, in recent months, ChatGPT experienced a temporary shutdown due to a bug that inadvertently allowed some users to view the chat history titles of others. This situation prompted the Privacy Authority (Garante) in Italy to temporarily block the chatbot and request that OpenAI implement additional measures to protect users’ data and improve transparency. Since then, regulators in other countries, including Ireland, Germany, France, and the UK, have also raised concerns about how the app handles data privacy.
The tech company has always strongly emphasized privacy protection, so it is no surprise that it takes a cautious approach to third-party generative AI tools. Its worries were not limited to OpenAI’s chatbot but also extended to the coding assistant GitHub Copilot, owned by Microsoft.
Nevertheless, Cupertino continues to show a keen interest in the world of generative AI. As reported by the Wall Street Journal (WSJ), Apple is actively developing its own proprietary large language model, spearheaded by John Giannandrea, a former Google executive who joined Apple in 2018. Apple's strategic acquisitions of AI-focused startups further emphasize the company's commitment to strengthening its expertise in this transformative field.
During a recent earnings call, Apple's CEO, Tim Cook, expressed his views on the progress made in the AI sector. He said Apple has already folded machine learning and AI into some of its products. While acknowledging the importance of embracing this technology, he also stated: “It's very important to be deliberate and thoughtful on how you approach these things,” adding: “And there are several issues that need to be sorted ... but the potential is certainly very interesting."
The wave of restrictions on generative AI tools by many companies demonstrates their power and impact. It’s clear that while these tools pose real challenges for data privacy and security, they also offer many opportunities for users, such as creativity, learning, and fun. Companies that handle sensitive information need to be proactive and responsible when using these tools, and ensure that their employees follow best practices and guidelines.
As generative AI becomes more accessible and sophisticated, how will we leverage its potential for good? What are the benefits and possibilities of these tools for our society? These are some of the questions that we need to explore as we go forward and enter a new era of AI-powered communication.