
Frequently asked questions for Copilot in Fabric in the SQL database workload (Preview)

Applies to: SQL database in Microsoft Fabric

This article answers frequently asked questions about Microsoft Copilot in Fabric in SQL database.

Copilot is now integrated with the query editor, enhancing the management and operation of SQL-dependent applications. It improves productivity by offering natural language to SQL conversion and self-help for database administration.
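For example, a natural language prompt such as "Show the five customers with the highest total order value" might be translated by Copilot into T-SQL along these lines. The `Sales.Customers` and `Sales.Orders` tables here are hypothetical, used only for illustration; the actual query Copilot generates depends on your database schema:

```sql
-- Hypothetical schema: Sales.Customers (CustomerID, CustomerName)
--                      Sales.Orders (OrderID, CustomerID, TotalDue)
-- Prompt: "Show the five customers with the highest total order value"
SELECT TOP (5)
    c.CustomerID,
    c.CustomerName,
    SUM(o.TotalDue) AS TotalOrderValue
FROM Sales.Customers AS c
JOIN Sales.Orders AS o
    ON o.CustomerID = c.CustomerID
GROUP BY c.CustomerID, c.CustomerName
ORDER BY TotalOrderValue DESC;
```

As with any generated query, review the joins, filters, and aggregations before running it against production data.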

Where in SQL database in Fabric can I find Copilot?

Copilot is available in several locations in the SQL query editor, including inline code completions, quick actions, and the Copilot chat pane.

How do I enable Copilot for my SQL database in Fabric?

Review the steps in Enable Copilot in Fabric.

Are the results from Copilot reliable?

Copilot is designed to generate the best possible responses with the database context it has access to. However, like any AI system, responses aren't perfect. All of Copilot's responses should be carefully tested, reviewed, and vetted before making changes to your Fabric SQL database environment.

Does Copilot write perfect or optimal queries?

Copilot aims to provide accurate and informative responses based on the available data. The answers generated are based on patterns and probabilities in language data, which means that they might not always be accurate. Humans should carefully review, test, and validate all content generated by Copilot.
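One lightweight way to vet a Copilot-generated data modification before applying it is to run it inside an explicit transaction, inspect the result, and roll it back. The `dbo.Products` table and the `UPDATE` statement below are hypothetical examples standing in for generated code:

```sql
-- Hypothetical generated statement wrapped in a transaction for review
BEGIN TRANSACTION;

UPDATE dbo.Products
SET ListPrice = ListPrice * 1.10   -- generated change under review
WHERE Category = 'Accessories';

-- @@ROWCOUNT reports how many rows the UPDATE touched
SELECT @@ROWCOUNT AS RowsAffected;

-- Spot-check the modified rows before deciding
SELECT TOP (10) ProductID, ListPrice
FROM dbo.Products
WHERE Category = 'Accessories';

-- Undo the change; replace with COMMIT TRANSACTION once validated
ROLLBACK TRANSACTION;
```

This pattern lets a human confirm the scope and effect of a generated statement before any change becomes permanent.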

How does Copilot use data from my Fabric SQL database environment?

Copilot generates responses grounded in your Fabric SQL database environment. Copilot only has access to resources that you have access to and can only perform analysis and actions that you have the permissions to perform. Copilot respects all existing access management and protections.

What data does Copilot collect?

User-provided prompts and Copilot's responses aren't used to further train, retrain, or improve the Azure OpenAI Service foundation models that generate responses. User-provided prompts and Copilot's responses are collected and used to improve Microsoft products and services only when users give explicit consent to include this information within feedback. We collect user engagement data, such as the number of chat sessions and session duration, the skill used in a particular session, thumbs-up and thumbs-down ratings, and written feedback. This information is retained and used as described in the Microsoft Privacy Statement.

What should I do if I see unexpected or offensive content?

The SQL database team built Copilot guided by our AI principles and Responsible AI Standard. We have prioritized mitigations that reduce customers' exposure to offensive content. However, you might still see unexpected results. We're constantly working to improve our technology to prevent harmful content. For more information, see Privacy, security, and responsible use of Copilot in Fabric.

If you encounter harmful or inappropriate content in the system, you can provide feedback or report a concern by selecting the downvote button on the response. We're constantly working to improve our technology to proactively address issues in line with our responsible AI principles.

How much does Copilot cost?

To learn more about how the Fabric Copilot usage is billed and reported, see Copilot in Fabric consumption.

How is the transmitted prompt and query data protected?

Copilot takes several measures to protect data, including:

  • The transmitted data is encrypted both in transit and at rest; Copilot-related data is encrypted in transit using TLS, and at rest using Microsoft Azure's data encryption (FIPS Publication 140-2 standards).

  • Access to log and feedback data is strictly controlled. The data is stored in a separate subscription and is only accessible through Just-In-Time (JIT) approval granted to Azure operations personnel using secure admin workstations.

Where can I learn more about privacy and data protection?

For more information on how Copilot processes and uses personal data, see Privacy, security, and responsible use of Copilot in Fabric.