Iryna Yamborska

How does Salesforce address the security challenges of Generative AI for CRM? The Einstein Trust Layer is a solution

OpenAI released ChatGPT in November 2022, and the chatbot attracted worldwide attention with its ability to handle challenging language understanding and generation tasks in conversation. ChatGPT became a sensation and changed how people interact with AI models.


ChatGPT is based on a so-called large language model (LLM) and relies on unsupervised deep learning: neural networks trained on massive datasets (close to the entire open internet). To improve its ability to handle diverse prompts in a safe, coherent manner, the model was further optimized with a method known as reinforcement learning from human feedback.


So, ChatGPT can learn. Hardly breaking news, and the world seemed more than okay with it. But in March 2023, Italy blocked ChatGPT. The country's data protection authority restricted access to the service over privacy concerns, prompted by a leak of users' personal data: information about other users' requests and payment card details had become visible.


In March 2023, Salesforce launched Einstein GPT, the first-ever generative AI for CRM. Einstein GPT was created in partnership with OpenAI, so in effect it is a ChatGPT-like tool for CRM. Salesforce invites its users to entrust the management of clients' personal data to generative AI. But, considering the Italian case, is that really safe? Could Einstein GPT "learn" from its communication with Salesforce CRM users and then share the key points with someone else?


This article explains how Salesforce handles generative AI security issues with the Einstein Trust Layer.


What is Einstein GPT?


Salesforce Einstein is an advanced AI technology integrated smoothly and unobtrusively into Salesforce products. Using data collected from users' actions, Einstein provides predictive analytics, natural language processing, and machine learning capabilities. By studying historical data for specific parameters, Einstein AI builds data models trained on extensive datasets, keeping its predictions and recommendations consistently up to date.

In a pioneering announcement on March 7, 2023, Salesforce introduced Einstein GPT, the world's first generative AI for CRM. A collaborative effort with OpenAI, Einstein GPT generates personalized content across Salesforce clouds and products, including Tableau, MuleSoft, and Slack.


Einstein GPT

So, how is Einstein GPT used in practice?


Einstein GPT strengthens Salesforce's existing AI models with generative models from partner ecosystems and real-time data from the Salesforce Data Cloud. This unification of live customer data helps Salesforce users easily craft "trusted" content within their Salesforce CRM environment. Consider these examples:


  • Effortlessly automate the process of sending emails to colleagues and clients.

  • Elevate customer support services by delivering accurate and targeted responses.

  • Transform specific datasets into visually compelling representations.

  • Streamline code generation for developers designing online customer journeys.

How does Einstein GPT deal with this security vulnerability?

Generative AI can be a double-edged sword for businesses. On the one hand, it helps businesses increase efficiency and generate human-like text and creative content. On the other hand, a security problem common to LLMs is that users may upload confidential data to them, and LLMs use the inputs they receive to further train the models. LLMs are not designed to be secure vaults: they may expose confidential data in response to queries from other users.


But how can you entrust "business secrets" to something that can share them with anyone? LLMs are not like databases: they learn from data rather than store it. You can't simply enter information and then set up access controls on top of it; that approach won't work with an LLM, because users can't manage how the data comes back out of it. That is the key problem solved by the Einstein Trust Layer.


Einstein Trust Layer


Salesforce's Einstein Trust Layer is a protective layer that stands between the corporate data stored in your CRM and the LLM. Salesforce promises that with the Trust Layer, this data will never leave Salesforce through the LLM. The company developed a zero-retention architecture, so none of the personal data is ever saved by the LLM: requests are not retained once they have been processed, and users can simply delete a request when they are done with it.


Along with the Trust Layer, Salesforce introduced data masking. When users enter personally identifiable information, it can be masked so that the generative AI sees only a set of placeholder characters instead of the real values.
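Conceptually, masking works like the minimal Python sketch below. This is an illustration of the general technique, not Salesforce's actual implementation: PII values are swapped for placeholder tokens before the prompt leaves the system, and the token-to-value mapping stays local so the response can be un-masked afterwards. The patterns and function names here are hypothetical.

```python
import re

# Hypothetical illustration of PII masking before a prompt reaches an LLM.
# This is not the Einstein Trust Layer's actual code, only the general idea.

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s\-()]{7,}\d"),
}

def mask_pii(prompt: str):
    """Replace PII with placeholder tokens; keep a local mapping for un-masking."""
    mapping = {}
    masked = prompt
    for label, pattern in PII_PATTERNS.items():
        for i, match in enumerate(pattern.findall(masked)):
            token = f"<{label}_{i}>"
            mapping[token] = match
            masked = masked.replace(match, token)
    return masked, mapping

def unmask(response: str, mapping: dict) -> str:
    """Restore the original values in the LLM's response before showing it to the user."""
    for token, value in mapping.items():
        response = response.replace(token, value)
    return response

masked_prompt, mapping = mask_pii(
    "Draft a follow-up email to jane.doe@example.com; her phone is +1 415 555 0123."
)
# masked_prompt now contains <EMAIL_0> and <PHONE_0>; only this version would be sent to the LLM.
```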


The first step of the process, before anything is generated, is secure data retrieval. Einstein GPT integrates with the company's existing security architecture and confirms users' access in strict accordance with their data permissions, so only records a user is allowed to see are retrieved. This keeps the org's security settings in force in all engagements with Einstein GPT and provides a foundation of assurance for your data security.
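As a rough sketch of the idea (not Salesforce's real API), the example below grounds a prompt only with records the requesting user is already permitted to read. The `crm`, `user`, and `llm` objects and all function names are assumptions made for illustration.

```python
# Hypothetical sketch of permission-aware data retrieval before prompt grounding.
# None of these objects or methods come from Salesforce; they are illustrative only.

def retrieve_grounding_data(user, record_ids, crm):
    """Return only the CRM records this user is permitted to read."""
    allowed = []
    for record_id in record_ids:
        record = crm.get(record_id)
        # Respect the org's existing sharing rules before the LLM sees anything.
        if record is not None and user.can_read(record):
            allowed.append(record)
    return allowed

def generate_reply(user, record_ids, instruction, crm, llm):
    """Ground the prompt with permission-checked data, then ask the LLM."""
    grounding = retrieve_grounding_data(user, record_ids, crm)
    context = "\n".join(str(record) for record in grounding)
    prompt = f"{instruction}\n\nContext:\n{context}"
    return llm.complete(prompt)
```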


Conclusion


Large language models can be a great support for businesses. Generative AI can create emails, sharpen targeted advertisements, and provide analytics; the list of pros is almost never-ending. But there is also one huge con that can wipe out everything good generative AI brings to a business: the data leakage concern. Having clients' PII exposed through an LLM means a substantial reputation loss for the company. Hence, businesses need to ensure that their confidential information and trade secrets are not shared with ChatGPT, as they could be released to the public.


But here's the thing: in collaboration with OpenAI, Salesforce offers its customers Einstein GPT, the first generative AI for CRM. How can users interact with Einstein GPT effectively with this considerable security issue on their minds? Salesforce has the answer, and the answer is the Einstein Trust Layer: a barrier between proprietary information and the LLM. The Trust Layer masks data and verifies users, and Salesforce ensures that customer data will never leave Salesforce cloud products through the LLM.




Oleg Minko, CTO at Sparkybit

Let's unlock the full potential of Salesforce for your business


Our team has successfully implemented and customized Salesforce for many businesses, resulting in increased revenues and greater customer satisfaction. Let's talk about your case!




