Thursday, 18 December 2025

Custom AI Agents: Use the Copilot Retrieval API for grounding on SharePoint Content

If you’re building a custom AI agent (Copilot Studio, Agents SDK, or your own app) and you want permission-trimmed content from SharePoint without managing your own vector store, then the Microsoft 365 Copilot Retrieval API is a great solution. 

It’s part of the broader Copilot APIs surface in Microsoft Graph, designed to reuse the same “in-tenant” grounding behavior that Microsoft 365 Copilot uses. To learn more about the Copilot APIs, have a look at the documentation on Microsoft Learn.

So in this post, let's take a deep dive into the Copilot Retrieval API:


On a high level, we’ll:

  • Decide when to use Retrieval API vs “normal” Graph CRUD/search

  • Configure the minimum delegated permissions for SharePoint grounding

  • Call the Retrieval endpoint with dataSource=sharePoint

  • Scope results to specific sites using KQL filterExpression

  • Use the returned extracts + URLs as grounding context for your LLM

Prereqs

  • A Microsoft 365 Copilot license for each user calling Copilot APIs (this is separate from standard Graph CRUD usage).

  • Your app uses delegated auth (application permissions aren’t supported for this API).

  • Microsoft Entra app registration with delegated Graph permissions: Files.Read.All + Sites.Read.All (required together for SharePoint/OneDrive retrieval).

  • Familiarity with the Copilot APIs model (Copilot APIs are REST under Microsoft Graph and use standard Graph auth).

1) Pick The Right Tool: Retrieval vs Graph

Use Microsoft Graph CRUD when you need to read/update SharePoint data (lists, drives, items, etc.). 

Use the Copilot Retrieval API when you need ranked text content (snippets) to ground an LLM response, while keeping content in place and respecting permissions/sensitivity labels.

A good heuristic:

  • “Give me the document metadata / file bytes” → Graph sites/drives/items

  • “Answer using the most relevant parts of our HR policies” → Retrieval API

2) Set Delegated Permissions

The Retrieval API is delegated-only. That’s intentional: the service retrieves snippets from content the calling user can access, permission-trimmed at query time.

Minimum permissions for SharePoint grounding:

  • Files.Read.All

  • Sites.Read.All 


3) Test Quickly In Graph Explorer

The fastest way to validate your tenant and permissions is Graph Explorer, using the same POST request shown in section 4.

Click path: Graph Explorer → Sign in → Modify permissions → Consent → Run query



4) Call Retrieval For SharePoint (With Site Scoping)

Here’s the core call. You provide:

  • queryString (single sentence, up to 1,500 chars)

  • dataSource = sharePoint

  • optional filterExpression (KQL) to scope sites

  • optional resourceMetadata

  • maximumNumberOfResults (1–25) 

Change these values:
  • queryString: the user’s natural-language question

  • filterExpression: one site, or multiple sites using OR

  • resourceMetadata: only the metadata fields you want returned

  • maximumNumberOfResults: keep within 1–25

Minimal working example
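Here's a minimal sketch in PowerShell, assuming the v1.0 /copilot/retrieval endpoint and a delegated access token you've already acquired; $accessToken, the site URL, the query text and the resourceMetadata field names are placeholders to change:

# Acquire $accessToken via your delegated auth flow (e.g. MSAL) before running this.
$headers = @{
    "Authorization" = "Bearer $accessToken"
    "Content-Type"  = "application/json"
}

$body = @{
    queryString            = "What is our remote work policy?"
    dataSource             = "sharePoint"
    filterExpression       = 'path:"https://contoso.sharepoint.com/sites/HR"'
    resourceMetadata       = @("title", "author")
    maximumNumberOfResults = 10
} | ConvertTo-Json

$response = Invoke-RestMethod `
    -Method Post `
    -Uri "https://graph.microsoft.com/v1.0/copilot/retrieval" `
    -Headers $headers `
    -Body $body

# Each hit carries the web URL plus ranked text extracts you can feed to your model.
$response.retrievalHits | ForEach-Object {
    $_.webUrl
    $_.extracts | ForEach-Object { $_.text }
}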

Run the PowerShell call above and expect a response with retrievalHits[], each containing the following (an illustrative response is sketched after this list):

  • webUrl

  • extracts[] with text + relevanceScore

  • optional resourceMetadata and sensitivityLabel
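Illustratively, a response looks roughly like this (values are made up and the exact property shapes can vary by tenant and content):

{
  "retrievalHits": [
    {
      "webUrl": "https://contoso.sharepoint.com/sites/HR/Shared%20Documents/Remote-Work-Policy.docx",
      "extracts": [
        {
          "text": "Employees may work remotely up to three days per week...",
          "relevanceScore": 0.92
        }
      ],
      "resourceMetadata": {
        "title": "Remote Work Policy",
        "author": "Megan Bowen"
      },
      "sensitivityLabel": {
        "displayName": "General"
      }
    }
  ]
}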

Troubleshooting

  • 403 / access denied: confirm the signed-in user has a Copilot license (Copilot APIs require it).

  • No results: remove filterExpression first, then add it back (KQL path scoping is easy to over-constrain).

  • 400 bad request: maximumNumberOfResults must be 1–25, and queryString has constraints (single sentence, 1,500 chars).

  • Trying application permissions: not supported—switch to delegated auth.

  • Using /beta in production: move to v1.0 (beta can change and isn’t supported for production).

Notes

  • The Retrieval API is built to avoid “DIY RAG plumbing” (export/crawl/index/vector DB) while still honoring Microsoft 365 security and compliance boundaries.

  • Use filterExpression with path:"<site url>" to keep grounding tight (single site or multiple sites with OR).

  • You typically pass the returned extracts.text + webUrl into your model prompt, and keep the URLs as citations in your UI.
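For example, here's a rough PowerShell sketch (continuing from the $response variable in the call above) that turns hits into a grounding block with citations; the prompt wording is just an example:

# Build a grounding block: extract text plus the source URL for citations.
$grounding = $response.retrievalHits | ForEach-Object {
    $url = $_.webUrl
    $_.extracts | ForEach-Object { "Source: $url`n$($_.text)" }
} | Out-String

$prompt = @"
Answer the question using only the sources below. Cite the source URLs you used.

$grounding

Question: What is our remote work policy?
"@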

Wrapping up

The Copilot Retrieval API is the most pragmatic way to pull permission-trimmed SharePoint grounding content into your own agents without building or hosting a parallel index. Once you can reliably retrieve relevant extracts, everything else becomes “just orchestration” around your model and UX.

Hope this helps!

Sunday, 2 November 2025

Bring third party data into Declarative Agents for M365 Copilot (using TypeSpec)

In my previous post, Getting Started: Declarative Agents With TypeSpec for Microsoft 365 Copilot, we saw the basics of Declarative Agents and how they are the sweet spot between no-code and pro-code agents.

In this post, we will see how we can integrate third-party APIs in Declarative Agents. Declarative Agents can call third-party APIs through “actions” you describe in TypeSpec. The agent decides when to invoke your action, passes parameters, and you choose how the results render in chat (e.g., Adaptive Cards). We’ll be plugging in a public weather API to show the end-to-end pattern.



On a high level, we’ll:

  • Scaffold a Declarative Agent in VS Code.

  • Add two actions: city → coords, then coords → weather.

  • Bind results to Adaptive Cards for clean output.

  • Test with natural prompts like “Weather in Paris, France”.

Prereqs

  • Microsoft 365 tenant + VS Code with the Microsoft 365 Agents Toolkit extension.

  • TypeSpec packages from the scaffold.

  • Public APIs (no auth):

    • Geocoding: https://geocoding-api.open-meteo.com/v1/search?name={city}&count=1

    • Weather: https://api.open-meteo.com/v1/forecast?latitude={lat}&longitude={lon}&current=temperature_2m,wind_speed_10m

1) Scaffold a TypeSpec Declarative Agent

Click path: VS Code → Microsoft 365 Agents Toolkit (sidebar) → Create a New Agent/App → Declarative Agent (TypeSpec) → Finish.


2) Add the TypeSpec files (agent + actions)

Update the main.tsp file:
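Here's a rough sketch of the shape of mine. The decorators follow the Agents Toolkit TypeSpec scaffold, and the agent name, instructions and conversation starter are just examples, so verify against your generated main.tsp:

import "@typespec/http";
import "@microsoft/typespec-m365-copilot";
import "./actions.tsp";

using TypeSpec.M365.Copilot.Agents;

@agent(
  "WeatherAgent",
  "An agent that answers questions about the current weather in a city."
)
@instructions("""
  You help users get the current weather.
  When the user asks about the weather in a city, first call searchCity to
  resolve the city name to latitude and longitude, then call getWeather with
  those coordinates. Show the location card before the weather card.
  """)
@conversationStarter(#{
  title: "Weather in Paris",
  text: "Show me the weather in Paris, France."
})
namespace WeatherAgent {
  // The two actions are declared in actions.tsp (imported above).
}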

And the actions.tsp file containing the action details:
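Here's a sketch of the two actions against the Open-Meteo endpoints, written as plain HTTP operations. The operation and model names are mine; the scaffold also layers Copilot-specific action metadata and Adaptive Card bindings on top of these, so keep those decorators from your generated file:

import "@typespec/http";
import "@microsoft/typespec-m365-copilot";

using TypeSpec.Http;

// Action 1: resolve a city name to coordinates (geocoding).
@service
@server("https://geocoding-api.open-meteo.com", "Open-Meteo geocoding API")
namespace GeocodingAPI {
  model GeocodingResult {
    name: string;
    latitude: float64;
    longitude: float64;
    country_code?: string;
  }

  model GeocodingResponse {
    results?: GeocodingResult[];
  }

  @route("/v1/search")
  @get
  op searchCity(@query name: string, @query count?: int32): GeocodingResponse;
}

// Action 2: current weather for a pair of coordinates.
@service
@server("https://api.open-meteo.com", "Open-Meteo forecast API")
namespace WeatherAPI {
  model CurrentWeather {
    time: string;
    temperature_2m: float64;
    wind_speed_10m: float64;
  }

  model WeatherResponse {
    timezone: string;
    current: CurrentWeather;
  }

  @route("/v1/forecast")
  @get
  op getWeather(
    @query latitude: float64,
    @query longitude: float64,
    @query current: string
  ): WeatherResponse;
}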

3) Add Adaptive Cards

Create adaptiveCards/ and add:

adaptiveCards/location-card.json
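A minimal, illustrative card (Adaptive Card 1.5 with template placeholders bound to the geocoding response; the exact binding paths depend on how your action response is wired up):

{
  "$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
  "type": "AdaptiveCard",
  "version": "1.5",
  "body": [
    { "type": "TextBlock", "size": "Medium", "weight": "Bolder", "text": "${name}, ${country_code}" },
    { "type": "TextBlock", "text": "Latitude: ${latitude}  |  Longitude: ${longitude}", "wrap": true }
  ]
}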

adaptiveCards/weather-card.json
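And a matching sketch for the weather card, again with illustrative template bindings against the Open-Meteo response:

{
  "$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
  "type": "AdaptiveCard",
  "version": "1.5",
  "body": [
    { "type": "TextBlock", "size": "Medium", "weight": "Bolder", "text": "Current weather" },
    { "type": "TextBlock", "text": "Temperature: ${current.temperature_2m} °C", "wrap": true },
    { "type": "TextBlock", "text": "Wind: ${current.wind_speed_10m} km/h", "wrap": true },
    { "type": "TextBlock", "text": "Time: ${current.time} (${timezone})", "wrap": true }
  ]
}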


4) Build & test

  • In Agents Toolkit: Provision.

  • Open the web test for your agent and try:

Prompts

  • “What’s the weather in Seattle right now?”

  • “Show me the weather in Paris, France.”

  • “Get weather for latitude 40.7128 and longitude -74.0060.”

Expected

  • The agent calls searchCity (shows Location card) → then getWeather (shows Weather card with °C, wind, time, timezone)



Minimal working example

Prompt: “Weather for Pune.”

Flow: searchCity(name="Pune") → top match → getWeather(latitude, longitude, current="temperature_2m,wind_speed_10m", location="Pune, IN")


Output: Two cards; final card shows temperature (°C), wind (km/h), time, and timezone.

Wrapping up

Declarative Agents make third-party API calls feel native: describe the action, let the agent orchestrate, and render the result with Adaptive Cards. 

Hope this helps!

Saturday, 11 October 2025

Getting started: Declarative Agents with TypeSpec for Microsoft 365 Copilot

Declarative agents let you add focused skills to Microsoft 365 Copilot without spinning up servers or long-running code. Compared to Copilot Studio agents (full or Lite), which are great for UI-first orchestration and built-in connectors, declarative agents are repo-friendly, schema-driven artifacts you version, review, and ship like code. 

You describe instructions and capabilities in TypeSpec and the M365 Agents toolkit compiles that into a manifest and handles provisioning.

So in this post, we’ll use the TypeSpec starter in the Microsoft 365 Agents Toolkit and light up a couple of built-in capabilities.


On a high level, we’ll:

  • Install the Microsoft 365 Agents Toolkit and TypeSpec bits. 

  • Scaffold a Declarative Agent (TypeSpec) project in VS Code. 

  • Add capabilities (OneDrive/SharePoint search + People + Code Interpreter). 

  • Provision and test the agent from the toolkit.

Prereqs

  • Microsoft 365 tenant (Developer or test tenant recommended) and permission to create app registrations.

  • Visual Studio Code with Microsoft 365 Agents Toolkit extension. 

  • TypeSpec for Microsoft 365 Copilot (installed automatically by the toolkit starter). 


1) Create a TypeSpec Declarative Agent

  1. Open VS Code.

  2. From the sidebar, open Microsoft 365 Agents Toolkit → Create a New Agent/App → Declarative Agent → Start with TypeSpec for Microsoft 365 Copilot.

  3. Name it (e.g., ContosoFinder) and choose the default folder.

  4. In the Lifecycle pane, select Provision to create the Azure AD app and resources in your tenant.


2) Understand the project

The starter includes:

  • main.tsp (your agent definition)

  • Build scripts that compile TypeSpec → agent manifest JSON

  • Toolkit tasks for Provision, Deploy, Publish 



3) Add core capabilities in TypeSpec

Open main.tsp and define instructions plus capabilities. Here’s a compact example that enables OneDrive/SharePoint, People, and CodeInterpreter for simple Python math/CSV ops:
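Here's a sketch of roughly what that can look like. The @agent/@instructions/@conversationStarter decorators and the AgentCapabilities names follow the toolkit's built-in catalog, but treat this as illustrative and check it against your scaffold; optional parameters such as the search result limit and Code Interpreter timeout are left at their defaults here:

import "@typespec/http";
import "@microsoft/typespec-m365-copilot";

using TypeSpec.M365.Copilot.Agents;

@agent(
  "ContosoFinder",
  "Finds relevant files and people, and runs quick Python analysis."
)
@instructions("""
  You help Contoso employees find documents in OneDrive and SharePoint,
  look up colleagues, and run simple Python for math and CSV analysis.
  Keep answers short and always link to the files you used.
  """)
@conversationStarter(#{
  title: "Find QBR files",
  text: "Find my last 5 QBR files"
})
namespace ContosoFinder {
  // Built-in capabilities, referenced from the capability catalog.
  op search is AgentCapabilities.OneDriveAndSharePoint;
  op people is AgentCapabilities.People;
  op codeInterpreter is AgentCapabilities.CodeInterpreter;
}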

Change these values: title, instructions text, conversation starters, search resultLimit, and Code Interpreter timeout as needed. Capability names/params align with the built-in catalog.

4) Build & validate

  • In the Agents Toolkit Lifecycle pane: select Provision, which starts the build, validate, and deploy process.

5) Test in Microsoft 365 Copilot (tenant)

  • Once provisioning succeeds, open your agent in Microsoft 365 Copilot.

  • Try prompts like:

    • “Find my last 5 QBR files” → You should see a list of SharePoint/OneDrive files with links.

    • “Who are my peers?” → The agent returns people details from the Graph.

    • “Summarize this CSV and plot top 3 values” → Code Interpreter runs Python and returns a brief summary + chart image. 



Notes

  • Prefer built-in capabilities (Search, People, OneDrive/SharePoint, Teams Messages, WebSearch, Code Interpreter) before adding custom APIs. They’re simpler and tenant-aware.

  • When you need external systems, add an API plugin to your declarative agent via TypeSpec; plugins are actions inside declarative agents (not standalone in Copilot).

  • Great learning paths: the “Build your first declarative agent using TypeSpec” module and recent open labs/samples. 

Wrapping up

With TypeSpec, declarative agents become a clean, versionable artifact you can build, validate, and ship from VS Code. Start with built-in capabilities, keep instructions focused, and only plug external APIs when the scenario demands it. 

Hope this helps!

Sunday, 5 October 2025

Enable Code Interpreter in Copilot Studio (Full Version)

Code interpreter in Copilot Studio is a built-in runtime that lets Copilot write and run short Python code in a secure sandbox. It can read files that you upload, perform calculations, build charts, transform data, and return the outputs inline or as downloadable files. Typical uses include quick analysis of CSV or Excel data, data cleaning, format conversions, and simple visualizations, all driven by natural language prompts. More information here.

In the full Copilot Studio experience you enable it at the environment level and then turn it on for specific prompts so your agent can choose when to use code for better answers.

So in this post, let’s enable the new Code interpreter capability in the full version of Copilot Studio and use it from a prompt. We’ll keep it small: flip the right admin switch, add a prompt as a tool in an agent, and verify with a quick run. 

This is not about Copilot Studio Lite/Agent Builder (that already exposes a simple toggle); this walkthrough is for the full Copilot Studio experience.


On a high level, we’ll:

  1. Turn on Code interpreter for your environment in the Power Platform admin center (PPAC).

  2. In Copilot Studio (full), add a Prompt tool to an agent and enable Code interpreter in the prompt’s settings.

  3. Test with a couple of natural‑language requests that execute Python under the hood.

Prereqs: You’ll need access to PPAC (or an admin), a Copilot Studio environment, and a Microsoft 365 Copilot or Copilot Studio license in the right tenant. If you see a message like “Code interpreter is disabled for your environment or tenant”, it usually means step 1 wasn’t completed.

1) Enable at the environment level (admin step)

In Power Platform admin center:

  1. Go to Copilot → Settings.

  2. Under Copilot Studio, open Code generation and execution in Copilot Studio.

  3. Select your environment, choose On, and Save.



That switch unlocks Python‑based execution for prompts and agents in that environment. If you manage multiple environments (Dev/Test/Prod), repeat per environment.

2) Add a prompt to an agent and enable Code interpreter

In Copilot Studio (full):

  1. Open your agent → Tools tab → New tool → Prompt.

  2. In the prompt editor, select … → Settings.

  3. Toggle Enable code interpreter → Save/Close.

Tip: This one is easy to miss: the toggle lives in the prompt's Settings, not in the agent's capabilities. If the toggle is missing or disabled, double-check step 1 or your permissions.


 

3) A minimal prompt

Create a new prompt with the following Instructions:

You are a helpful assistant that can use Code interpreter when it's the best tool for the task.
If the user asks to analyze data, perform calculations, transform files, or generate charts, write and run Python code with safe defaults.
Explain what you ran and summarize the output clearly for a non-technical audience.

Add Inputs:

  • question (Text)

Test ideas:

  • “Simulate compound growth at 8% for 10 years and plot the curve.”

When you select Test, you should see the system take two passes: first it plans, then it generates and executes Python, returning results (and charts) inline.
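Under the hood, the generated code is roughly of this shape (illustrative only; the actual Python the interpreter writes, and the starting amount it assumes, will differ):

import matplotlib.pyplot as plt

# Illustrative: compound growth of 1,000 at 8% per year over 10 years.
principal = 1000
rate = 0.08
years = list(range(11))
values = [principal * (1 + rate) ** year for year in years]

plt.plot(years, values, marker="o")
plt.title("Compound growth at 8% over 10 years")
plt.xlabel("Year")
plt.ylabel("Value")
plt.savefig("growth.png")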



Now you can start using this prompt tool just like any other tool in your agent!

Troubleshooting

  • Toggle isn’t visible: The prompt’s settings show Code interpreter only when the environment is enabled in PPAC.

  • Blocked by policy: Some tenants restrict code execution. Check with your admin if the PPAC toggle is greyed out.

  • Host differences: In‑context agents that run inside other hosts may have limitations when Code interpreter is on. Test in your target host early.

  • Quotas: Code execution may be subject to usage limits. If runs are throttled, try again later or reduce dataset size.

Notes

  • In Copilot Studio Lite / Agent Builder, you’ll find Code interpreter under Configure → Capabilities as a simple toggle. This post focuses on the full Copilot Studio flow, where the setting lives inside each Prompt.

  • If you build agents with the Agents Toolkit/VS Code, you can also declare CodeInterpreter in the agent manifest (schema v1.2+). That’s a different path but useful for source‑controlled agents.

Wrapping up

That’s the minimal path: enable at PPAC → toggle in the Prompt settings → test with a simple data task. From here, stitch the prompt into your agent’s flow, pass inputs from variables, and add guardrails (size limits, safe defaults).

Hope this helps!

Sunday, 2 February 2025

Building an Agent for Microsoft 365 Copilot: Adding an API plugin

In the previous post, we saw how to get started building agents for Microsoft 365 Copilot. We saw the different agent manifest files and how to configure them. There are some great out-of-the-box capabilities available for agents, like web search, code interpreter and image creation.

However, a very common scenario is to bring in data from external systems into Copilot and pass it to the LLM. So in this post, we will have a look at how to connect Copilot with external systems using API Plugins.

In simple terms, API Plugins are AI "wrappers" on existing APIs. We inform the LLM about an API and give it instructions on when to call these APIs and which parameters to pass to them. 

To connect your API to the Copilot agent, we have to create an "Action" and include it in the declarativeAgent.json file we saw in the previous post:
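The relevant part of declarativeAgent.json looks something like this (the id value is just an example):

"actions": [
  {
    "id": "repairsPlugin",
    "file": "ai-plugin.json"
  }
]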

The ai-plugin.json file contains information on how the LLM will understand our API. We will come to this file later.

Before that, let's understand how our API looks. We have a simple API that can retrieve some sample data about repairs:
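For illustration, assume a single GET /repairs endpoint returning records shaped like this; the endpoint and fields are placeholders for your own API:

[
  {
    "id": 1,
    "title": "Oil change",
    "description": "Drain the old engine oil and replace it with fresh oil.",
    "assignedTo": "Karin Blair",
    "date": "2024-05-23"
  }
]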

Next, we will need the OpenAPI specification for this API:
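A trimmed sketch of what that spec can look like for the repairs endpoint above (OpenAPI 3.0 in YAML; the server URL and schema are placeholders):

openapi: 3.0.1
info:
  title: Repairs API
  version: 1.0.0
servers:
  - url: https://repairsapi.contoso.com
paths:
  /repairs:
    get:
      operationId: listRepairs
      summary: Returns a list of repairs and who they are assigned to.
      parameters:
        - name: assignedTo
          in: query
          required: false
          schema:
            type: string
      responses:
        '200':
          description: A list of repairs.
          content:
            application/json:
              schema:
                type: array
                items:
                  type: object
                  properties:
                    id:
                      type: integer
                    title:
                      type: string
                    description:
                      type: string
                    assignedTo:
                      type: string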

The OpenAPI specification is important as it describes in detail the exact signatures of our APIs to the LLM.

Finally, we will come to the most important file which is the ai-plugin.json file. It does several things. It informs Copilot which API methods are available, which parameters they expect and when to call them. This is done in simple human readable language so that the LLM can best understand it.

Additionally, it also handles formatting of the data before it's shown to the user in Copilot: whether we want to use Adaptive Cards to show the data, or run any other processing like removing direct links from the responses.

If you are doing plugin development, chances are that you will be spending most of your time on this file.
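Here's a trimmed sketch of the shape of this file, assuming the v2.1 plugin manifest schema. The property names broadly follow the published schema, but treat this as illustrative (names, descriptions and the spec file name are placeholders) and check the docs for the exact structure:

{
  "schema_version": "v2.1",
  "name_for_human": "Repairs Plugin",
  "description_for_human": "Look up repair records and who they are assigned to.",
  "description_for_model": "Use this to answer questions about repairs: list repairs, filter by the person they are assigned to, and report their status.",
  "functions": [
    {
      "name": "listRepairs",
      "description": "Returns a list of repairs, optionally filtered by the person they are assigned to.",
      "capabilities": {
        "response_semantics": {
          "data_path": "$",
          "properties": {
            "title": "$.title",
            "subtitle": "$.description"
          }
        }
      }
    }
  ],
  "runtimes": [
    {
      "type": "OpenApi",
      "auth": { "type": "None" },
      "spec": { "url": "openapi.yaml" },
      "run_for_functions": [ "listRepairs" ]
    }
  ]
}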

Once everything is in place, we can run our plugin with simple instructions and watch it fetch the data from the external system.


Hope this helps!

Monday, 6 January 2025

Building an Agent for Microsoft 365 Copilot Chat

Microsoft 365 Copilot Chat is a powerful, general-purpose assistant that greatly improves personal productivity, helping users manage tasks like emails, calendars, document searches, and more.

However, the true potential of M365 Copilot lies in its extensibility. By building specialized "vertical" agents on top of M365 Copilot, you can unlock team productivity as well as automate business processes. These custom agents not only help in individual productivity, but also help in building workflows across groups of people.

Agents for Microsoft 365 Copilot leverage the same robust foundation—its orchestrator, foundation models, and trusted AI services—that powers M365 Copilot itself. This ensures consistency, reliability, and security at scale.

So in this post, let's take a look at how to build Agents on top of Microsoft 365 Copilot. 

We will be building a Declarative Agent with the help of the Teams toolkit. Before we start, we need the following prerequisites:

  • A Microsoft 365 Copilot license

  • Teams Toolkit Visual Studio Code extension

  • Sideloading of Teams apps enabled

Once everything is in place, we will go to Visual Studio Code 

In the Teams Toolkit extension, to create an agent, we will click on "Create new app"

Then, click on "Agent"


When the agent is created, we see a bunch of files created as part of the scaffolding. So let's take a look at the different moving pieces of the agent:

manifest.json

If you have been doing M365 Apps (and Teams apps) for a while, you are familiar with this file. This is the file which represents our Agent in the M365 App catalog. It contains the various details like name, description and capabilities of the app.

However, you will notice a new property in this file: "copilotAgents". This property points to the file containing the definition of our new declarative agent.
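A minimal sketch of the relevant part (the id and file name follow the default scaffold, so adjust to yours):

"copilotAgents": {
  "declarativeAgents": [
    {
      "id": "declarativeAgent",
      "file": "declarativeAgent.json"
    }
  ]
}

So let's look at how that file looks next: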

declarativeAgent.json
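Here's a trimmed, illustrative version of the file; the starter's exact content will differ, and an actions array can sit alongside capabilities once you add an API plugin:

{
  "$schema": "https://developer.microsoft.com/json-schemas/copilot/declarative-agent/v1.0/schema.json",
  "version": "v1.0",
  "name": "My First Agent",
  "description": "An agent that helps with web research and quick data analysis.",
  "instructions": "$[file('instruction.txt')]",
  "conversation_starters": [
    {
      "title": "Latest news",
      "text": "Summarize today's top technology news."
    }
  ],
  "capabilities": [
    { "name": "WebSearch" },
    { "name": "CodeInterpreter" }
  ]
}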

Lots of interesting things are happening here. 

 
First, the "instructions" property points to a file containing the "system prompt" of our agent. We will have a look at this file later.

Next, the "conversation_starters" property contains ready-to-go prompts which the user can ask the agent. Our agent will be trained to respond to these prompts. This is so that the user is properly onboarded when they land on our agent.

Finally, there are the actions and capabilities properties:

The actions property contains connections to external APIs which the agent can invoke.

The capabilities property lists the different out-of-the-box "tools" we want to allow in our agent, e.g. SharePoint and OneDrive search, image creation, code interpreter to generate charts, etc.

We will talk about both these properties in more detail in subsequent blog posts.

instruction.txt

And finally we have the instructions file where we can specify the system prompt for the agent. Here, we can guide the agent and assign it a personality. We can make it aware of the tools and capabilities it has available and when to use them. We can provide one-shot or few-shot examples to "train" the agent on responding to users.
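For example, a starting point along these lines (wording is just an illustration):

You are a friendly assistant for the Contoso team.
You can search the web and run Python code to analyze data and create charts.
When a question needs fresh information, use web search and cite your sources.
When a question involves numbers, tables or trends, use the code interpreter and explain your results in plain language.
If you are not sure what the user wants, ask a clarifying question first.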

Once all files are in place, you can click "Provision" from the Teams toolkit extension:

And our new agent will be ready!

Here is how the agent provides the conversation starters when we first land on it:


Simple conversation using Web search capability:


Conversation using Web search and Code Interpreter capabilities:



Hope this was helpful! We will explore M365 Copilot Agents more in subsequent blog posts.

Monday, 9 December 2024

Search SharePoint and OneDrive files in natural language with OpenAI function calling and Microsoft Graph Search API

By now, we have seen "Chat with your documents" functionality being introduced in many Microsoft 365 applications. It is typically built by combining Large Language Models (LLMs) and vector databases. 

To make the documents "chat ready", they have to be converted to embeddings and stored in vector databases like Azure AI Search. However, indexing the documents and keeping the index in sync are not trivial tasks. There are many moving pieces involved. Also, many times there is no need for "similarity search" or "vector search", where results are matched based on the meaning of the query.

In such cases, a simple "keyword" search can do the trick. The advantage of using keyword search in Microsoft 365 applications is that the Microsoft Search indexes are already available as part of the service. APIs like the Microsoft Graph Search API and the SharePoint Search REST API give us "ready to consume" endpoints which can be used to query documents across SharePoint and OneDrive. Keeping these search indexes in sync with the changes in the documents is also handled by the Microsoft 365 service itself.

So in this post, let's have a look at how we can combine OpenAI's gpt-4o Large Language Model with Microsoft Graph Search API to query SharePoint and OneDrive documents in natural language. 

On a high level we will be using OpenAI function calling to achieve this. Our steps are going to be:

1. Define an OpenAI function and make it available to the LLM.  


2. During the chat, if the LLM decides that it needs to call our function to respond to the user, it will reply with the function name along with the parameters to pass.

3. Call the Microsoft Graph Search API based on the parameters provided by the LLM.

4. Send the results returned from the Microsoft Graph back to the LLM to generate a response in natural language.

So let's see how to achieve this. In this code I have used the following NuGet packages:

https://www.nuget.org/packages/Azure.AI.OpenAI/2.1.0

https://www.nuget.org/packages/Microsoft.Graph/5.64.0

The first thing we will look at is our OpenAI function definition:
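Here's a sketch of the definition, using ChatTool from Azure.AI.OpenAI 2.x; the function name, description and parameter schema are my own wording:

using OpenAI.Chat;

// Describes our "search files" function to the model, including the JSON schema of its parameters.
private static readonly ChatTool searchFilesTool = ChatTool.CreateFunctionTool(
    functionName: "search_files",
    functionDescription: "Searches SharePoint and OneDrive files relevant to the user's question and returns their summaries.",
    functionParameters: BinaryData.FromString("""
    {
      "type": "object",
      "properties": {
        "searchQuery": {
          "type": "string",
          "description": "Keywords describing the files to search for."
        }
      },
      "required": [ "searchQuery" ]
    }
    """));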

In this function we are informing the LLM that if it needs to search any files as part of providing the responses, it can call this function. The function name will be returned in the response and the relevant parameter will be provided as well. Now let's see how our orchestrator function looks:
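Roughly, the orchestration looks like this. Method and variable names are mine; CallOpenAI and SearchFiles are the helpers described further below:

using System.Text.Json;
using OpenAI.Chat;

private static async Task<string> RunChatAsync(string userQuestion)
{
    var messages = new List<ChatMessage>
    {
        new SystemChatMessage("You are an assistant that answers questions using the user's SharePoint and OneDrive files."),
        new UserChatMessage(userQuestion)
    };

    // First pass: let the model decide whether it needs to search files.
    ChatCompletion completion = await CallOpenAI(messages);

    if (completion.FinishReason == ChatFinishReason.ToolCalls)
    {
        // Keep the assistant's tool-call message in the history.
        messages.Add(new AssistantChatMessage(completion));

        foreach (ChatToolCall toolCall in completion.ToolCalls)
        {
            if (toolCall.FunctionName == "search_files")
            {
                using JsonDocument args = JsonDocument.Parse(toolCall.FunctionArguments);
                string searchQuery = args.RootElement.GetProperty("searchQuery").GetString();

                // Call the Microsoft Graph Search API and hand the results back to the model.
                string searchResults = await SearchFiles(searchQuery);
                messages.Add(new ToolChatMessage(toolCall.Id, searchResults));
            }
        }

        // Second pass: the model turns the Graph results into a natural-language answer.
        completion = await CallOpenAI(messages);
    }

    return completion.Content[0].Text;
}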

There is a lot to unpack here, as this function does the heavy lifting. It is responsible for handling the chat with OpenAI, calling the Microsoft Graph, and responding back to the user based on what the Graph returns.

Next, let's have a look at the code which calls the Microsoft Graph based on the parameters provided by the LLM. 

Before executing this code, you will need to have created an App registration. Here is how to do that: https://learn.microsoft.com/en-us/azure/active-directory/develop/quickstart-register-app 

Since we are calling the Microsoft Graph /search endpoint with delegated permissions, the app registration will need a minimum of the User.Read and Files.Read.All permissions granted. https://learn.microsoft.com/en-us/graph/api/search-query?view=graph-rest-1.0&tabs=http
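Here's a sketch of that call with the Microsoft Graph .NET SDK v5, assuming a delegated credential (an interactive browser sign-in here; swap in whichever Azure.Identity credential fits your app). Tenant ID and client ID are placeholders:

using System.Text;
using Azure.Identity;
using Microsoft.Graph;
using Microsoft.Graph.Models;
using Microsoft.Graph.Search.Query;

private static async Task<string> SearchFiles(string searchQuery)
{
    // Delegated auth: the signed-in user's permissions trim the search results.
    var credential = new InteractiveBrowserCredential(new InteractiveBrowserCredentialOptions
    {
        TenantId = "<tenant-id>",
        ClientId = "<app-registration-client-id>",
        RedirectUri = new Uri("http://localhost")
    });

    var graphClient = new GraphServiceClient(credential, new[] { "User.Read", "Files.Read.All" });

    var requestBody = new QueryPostRequestBody
    {
        Requests = new List<SearchRequest>
        {
            new SearchRequest
            {
                EntityTypes = new List<EntityType?> { EntityType.DriveItem },
                Query = new SearchQuery { QueryString = searchQuery },
                Size = 5
            }
        }
    };

    var response = await graphClient.Search.Query.PostAsQueryPostResponseAsync(requestBody);

    // Concatenate the summary of each hit so it can be passed back to the LLM.
    var summaries = new StringBuilder();
    foreach (var searchResponse in response?.Value ?? new List<SearchResponse>())
    {
        foreach (var container in searchResponse.HitsContainers ?? new List<SearchHitsContainer>())
        {
            foreach (var hit in container.Hits ?? new List<SearchHit>())
            {
                summaries.AppendLine(hit.Summary);
            }
        }
    }

    return summaries.ToString();
}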

This code gets the parameters sent from the LLM and uses the Microsoft Graph .NET SDK to call the /search endpoint and fetch the files based on the searchQuery property. Once the files are returned, their summary values are concatenated into a string and returned to the orchestrator function so that it can be sent back to the LLM.

Finally, let's have a look at our CallOpenAI function, which is responsible for talking to the OpenAI chat API.
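Here's a sketch of CallOpenAI using Azure.AI.OpenAI 2.1.0; the endpoint, API key and deployment name are placeholders, and searchFilesTool is the ChatTool defined earlier:

using System.ClientModel;
using Azure.AI.OpenAI;
using OpenAI.Chat;

private static async Task<ChatCompletion> CallOpenAI(List<ChatMessage> messages)
{
    var azureClient = new AzureOpenAIClient(
        new Uri("https://<your-resource>.openai.azure.com/"),
        new ApiKeyCredential("<your-api-key>"));

    ChatClient chatClient = azureClient.GetChatClient("gpt-4o");

    // Advertise the search_files function on every call so the model can ask for it when needed.
    var options = new ChatCompletionOptions
    {
        Tools = { searchFilesTool }
    };

    ChatCompletion completion = await chatClient.CompleteChatAsync(messages, options);
    return completion;
}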
 
This code defines the OpenAI function which will be included in our chat API calls. Also, the user's search query is sent to the API to determine if the function needs to be called. This function is also called again after the response from the Microsoft Graph is fetched. At that time, the messages contain the details fetched from the Graph so the model can generate an output in natural language. This way, we can use OpenAI function calling together with the Microsoft Graph API to search files in SharePoint and OneDrive.

Hope this helps!