While some people worry about the damage AI could do to humanity as a whole, many big tech firms are more concerned about what these external platforms could do with their sensitive data. OpenAI is closely partnered with Microsoft, so it makes sense that Microsoft's closest rivals would be extra cautious with its products. Reportedly, employees were using the model to streamline a variety of tasks, including writing emails and producing reams of code. Apple has notoriously tight security, and would likely prefer that its customer data and confidential product information not be entered into a program a close rival is actively invested in.
Likewise, Samsung is among the companies that have banned the use of external generative AI in the workplace, doing so after discovering that some employees had shared "sensitive code" with the platform, according to Bloomberg. That report, citing a leaked internal memo, alleges that Samsung was concerned about its data being stored on a third-party server outside of its control.
It is worth noting that OpenAI recently added additional privacy options. Users can now turn off their chat histories and request that their entries not be used to train the language model. However, enabling these options doesn't make your data 100% private. OpenAI says it is still monitoring all chats "for abuse." It's unclear what this means exactly, but it likely refers to messages that may break the rules, which are the ones that quickly turn orange or red. Additionally, all of the data is still kept on file for 30 days before being deleted.