Codementor Events

How to Securely Enable Tools for Agentic Workflows

Published May 06, 2025

The traditional method of instructing computers to perform specific tasks is evolving. We're entering an era where AI agents autonomously execute tasks based on context and permissions. A recent report indicates that 93% of IT leaders plan to implement autonomous AI agents within the next two years, with nearly half having already done so. These agents' effectiveness hinges on the tools and access levels provided.

Granting excessive control can lead to unintended consequences like overwriting permissions or exposing confidential data, while too little control may hinder performance. This article explores how to optimize AI agents' utility while minimizing risks.

Agentic Emergence

Large language models have shown real promise with their text and media generation capabilities, but generating text alone is not enough: text only presents facts and provides information, while the actual value lies in taking action based on the generated context. Bridging that gap took some time, but it has now been realized, making the concept of AI agents a reality.

Agents are software programs that autonomously plan and perform tasks, leveraging the context provided by the model and the tools exposed by users. The possibilities agents present have fueled adoption, and organizations are investing heavily in exploring and implementing them.

Secure Everything Mindset

Security techniques and protocols are evolving to accommodate GenAI solutions. Agentic solutions, however, are a whole different ball game: we grant agents access to specified tools, services, environments, and APIs so they can query, create, and update them. Given this level of control, and given that models hallucinate and can generate misinformation or insecure code, it is only a matter of time before we witness unexpected outcomes such as data loss, misconfiguration, and unnecessary refactoring or restructuring of the underlying environments and tools.

For example, in our use case, we will build an autonomous agent that interacts with the Outlook client and schedules emails with attachments based on the transcript generated during an MS Teams meeting:

  • With no filters or restrictions, the agent could attach confidential or irrelevant documents and send them in an informal tone.
  • The possibilities are limitless, and we cannot fully predict what the agent will do next.
  • The best security practices for the Microsoft 365 stack focus on securing all environments, tools, and functions while enforcing least privilege access.
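One way to apply least privilege to the emailer scenario is to validate every attachment against an explicit policy before the agent is allowed to send it. The sketch below is a hypothetical guardrail (the directory, extensions, and keyword list are illustrative assumptions, not part of the Microsoft 365 stack):

```python
import os

# Assumptions for illustration: a vetted share, an extension allowlist,
# and a keyword blocklist for file names.
ALLOWED_DIR = "/shared/meeting-docs"
ALLOWED_EXTENSIONS = {".pdf", ".docx", ".pptx"}
BLOCKED_KEYWORDS = {"confidential", "salary", "internal"}

def filter_attachments(paths):
    """Return only the attachments that satisfy the least-privilege policy."""
    safe = []
    for path in paths:
        name = os.path.basename(path).lower()
        _, ext = os.path.splitext(name)
        in_allowed_dir = os.path.abspath(path).startswith(ALLOWED_DIR)
        has_blocked_word = any(word in name for word in BLOCKED_KEYWORDS)
        if ext in ALLOWED_EXTENSIONS and in_allowed_dir and not has_blocked_word:
            safe.append(path)
    return safe
```

Calling such a filter between document extraction and the send step means the agent can only ever attach files the policy explicitly permits, regardless of what the model generates.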

Model Context Protocol: Architecture and Internal Workings

Software applications return programmatically transformed or calculated responses based on their inputs and underlying logic. LLMs are different: they generate text and media based on the knowledge and context they can access.

The Model Context Protocol (MCP) is a new and promising way of providing LLMs with context gathered from applications. MCP has a client-server architecture in which tools with predefined logic are called, in parallel or sequentially, based on the generated LLM response, to take action.

[Figure: Model Context Protocol architecture]

Source: Model Context Protocol
In a nutshell, MCP helps LLMs and AI tools communicate and connect with local data sources and remote services. MCP servers expose use-case-specific capabilities while the MCP client maintains a 1:1 connection between the components. For our use case, let us establish a connection with Microsoft Graph API, which helps us fetch the latest meeting ID, read the transcript of the latest meeting, feed it to the model as context, and perform tool calling to send the email.
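Before the agent can call Microsoft Graph, the app needs an access token. The sketch below shows the OAuth2 client-credentials flow that the O365 library performs under the hood when we authenticate; the endpoint and scope are the standard Microsoft identity platform values, while the helper names and IDs are placeholders for illustration:

```python
import requests

def build_token_request(tenant_id, client_id, client_secret):
    """Build the token endpoint URL and form payload for app-only Graph access."""
    url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"
    data = {
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        # ".default" requests all application permissions already granted to the app.
        "scope": "https://graph.microsoft.com/.default",
    }
    return url, data

def fetch_graph_token(tenant_id, client_id, client_secret):
    """POST the credentials and return the bearer token for Graph calls."""
    url, data = build_token_request(tenant_id, client_id, client_secret)
    resp = requests.post(url, data=data)
    resp.raise_for_status()
    return resp.json()["access_token"]
```

Because this is an app-only flow, the token carries whatever application permissions were consented to in Azure AD, which is exactly why scoping those permissions tightly matters for agentic use.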

Exposing Tools for Agentic Use

We will create the Emailer function and expose the function as an MCP tool for the server to trigger/call when the model completes generating the execution plan. Let’s create the base functionality that handles the API connection, fetches information, and exposes the functionality.

import asyncio
import os
from datetime import datetime, timedelta

import requests
from O365 import Account, MSGraphProtocol
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
# load_mcp_tools (from the langchain-mcp-adapters package) converts MCP tools
# into LangChain tools the agent can call.
from langchain_mcp_adapters.tools import load_mcp_tools
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

CLIENT_ID = "your_client_id"
CLIENT_SECRET = "your_client_secret"
TENANT_ID = "your_tenant_id"

# Launch the MCP server (emailer_server.py) as a subprocess over stdio.
MCP_SERVER_PARAMS = StdioServerParameters(command="python", args=["emailer_server.py"])

protocol = MSGraphProtocol()
credentials = (CLIENT_ID, CLIENT_SECRET)
account = Account(credentials, auth_flow_type='client_credentials', tenant_id=TENANT_ID, protocol=protocol)

def get_most_recent_meeting_id():
    """Fetch the most recent online meeting from the last seven days via Microsoft Graph."""
    if not account.is_authenticated:
        account.authenticate()

    end_date = datetime.utcnow()
    start_date = end_date - timedelta(days=7)

    url = (
        f"https://graph.microsoft.com/v1.0/me/calendarview"
        f"?startDateTime={start_date.isoformat()}Z&endDateTime={end_date.isoformat()}Z"
        f"&$filter=isOnlineMeeting eq true&$orderby=start/dateTime desc"
    )
    headers = {"Authorization": f"Bearer {account.connection.token['access_token']}"}
    response = requests.get(url, headers=headers)

    if response.status_code == 200:
        events = response.json().get("value", [])
        if not events:
            raise Exception("No recent Teams meetings found.")

        recent_meeting = events[0]
        meeting_id = recent_meeting.get("id")
        participants = recent_meeting.get("attendees", [])
        participant_emails = [
            attendee["emailAddress"]["address"]
            for attendee in participants
            if "emailAddress" in attendee
        ]
        return meeting_id, participant_emails
    raise Exception(f"Failed to fetch meetings: {response.text}")

def get_teams_transcript(meeting_id):
    """Fetch the call record for the meeting; fall back to mock data on failure."""
    try:
        url = f"https://graph.microsoft.com/v1.0/communications/callRecords/{meeting_id}"
        headers = {"Authorization": f"Bearer {account.connection.token['access_token']}"}
        response = requests.get(url, headers=headers)
        if response.status_code == 200:
            return response.json().get("transcription", "Mock transcript data")
    except Exception as e:
        print(f"Transcript fetch failed: {e}. Using mock data.")
    return "User1: Please review document1.pdf.\nUser2: I shared document2.docx."

def extract_documents(transcript):
    """Naively pull file names mentioned in the transcript."""
    docs = []
    for line in transcript.split('\n'):
        if "document" in line.lower() or ".doc" in line or ".pdf" in line:
            docs.append(line.split()[-1])
    return docs

async def process_and_send_email():
    model = ChatOpenAI(model="gpt-4o", api_key="your_openai_api_key")

    meeting_id, participants = get_most_recent_meeting_id()
    transcript = get_teams_transcript(meeting_id)
    print(f"Processing transcript for meeting {meeting_id}:\n{transcript[:100]}...")

    async with stdio_client(MCP_SERVER_PARAMS) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Expose the MCP server's tools to the LangChain agent.
            tools = await load_mcp_tools(session)

            prompt = (
                f"Analyze this Teams meeting transcript: '{transcript}'. "
                "Extract documents mentioned and prepare an email to participants. "
                "Use the 'send_email' tool with recipients, subject, body, and attachments."
            )

            agent = create_react_agent(model, tools)
            response = await agent.ainvoke({"messages": [("user", prompt)]})

            # If the agent never issued a tool call, fall back to calling the
            # send_email tool manually with values derived from the transcript.
            used_tool = any(getattr(m, "tool_calls", None) for m in response.get("messages", []))
            if not used_tool:
                email_args = {
                    "recipients": participants,
                    "subject": "Teams Meeting Summary",
                    "body": f"Here's the summary of our recent Teams meeting:\n\n{transcript}",
                    "attachments": extract_documents(transcript),
                }
                await session.call_tool("send_email", email_args)
                print("Manually invoked send_email tool.")
            else:
                print("Email sent via agent tool call.")

if __name__ == "__main__":
    asyncio.run(process_and_send_email())

With the base functionality in place, we can convert these functions into tools that agents can use through MCP. Every time a user prompts, or a scheduled trigger fires, the agent will fetch the latest meeting ID from the Microsoft Graph API and pass the endpoint's response to the model as context.

import os

from O365 import Account, MSGraphProtocol
from mcp.server.fastmcp import FastMCP

CLIENT_ID = "your_client_id"
CLIENT_SECRET = "your_client_secret"
TENANT_ID = "your_tenant_id"

protocol = MSGraphProtocol()
credentials = (CLIENT_ID, CLIENT_SECRET)
account = Account(credentials, auth_flow_type='client_credentials', tenant_id=TENANT_ID, protocol=protocol)

mcp = FastMCP("EmailerServer")

@mcp.tool()
def send_email(recipients: list, subject: str, body: str, attachments: list = None) -> str:
    """Send an email with optional attachments through the Outlook mailbox."""
    if not account.is_authenticated:
        account.authenticate()

    mailbox = account.mailbox()
    message = mailbox.new_message()
    message.to.add(recipients)
    message.subject = subject
    message.body = body

    if attachments:
        for file_path in attachments:
            if os.path.exists(file_path):
                message.attachments.add(file_path)
            else:
                return f"Error: attachment not found: {file_path}"

    message.send()
    return f"Email sent successfully to {', '.join(recipients)}"

if __name__ == "__main__":
    mcp.run()

Now we have the client and the server. MCP hosts like Claude Desktop or Cursor need to know where the tools are and how to run them when the agent is triggered. For every tool, we pass the absolute path of the scripts and the command to run them during the agentic execution flow (for Claude Desktop, this goes in claude_desktop_config.json).

{
   "mcpServers": {
       "emailer": {
           "command": "uv",
           "args": [
               "--directory",
               "~/.tmp/emailer",
               "run",
               "emailer_server.py"
           ]
       }
   }
}

Advantages of AI Agents and When Not to Use Them

Although we are still at the initial adoption stage, AI agents are expected to supersede traditional instruction-based workflows. Knowing what they are good at, and when to avoid them, can save organizations regrettable rework and tech debt.

  1. Their adaptive problem-solving nature allows them to strategize and evolve based on trends and patterns while tackling complex and challenging use cases.
  2. Decentralized agentic decision-making, combined with a human in the loop or RLHF, can boost agents' ability to operate independently in diverse environments.
  3. Model chaining, where multiple small models collaborate to make decisions, makes problem-solving efficient with promising results.
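The human-in-the-loop idea can be made concrete with a simple approval gate between the agent and its tools: low-risk tool calls pass through automatically, while risky ones (like sending email) require an explicit human decision. This is an illustrative sketch; the tool names and the `approve` callback are assumptions, not part of MCP or LangChain:

```python
# Hypothetical set of tools that should never run without human sign-off.
RISKY_TOOLS = {"send_email", "delete_file"}

def requires_approval(tool_name):
    """Return True if the tool is on the risky list."""
    return tool_name in RISKY_TOOLS

def gate_tool_call(tool_name, args, approve):
    """Decide whether a tool call may proceed.

    `approve` is a callback (e.g. a CLI prompt or a review UI) that receives
    the tool name and arguments and returns True/False.
    """
    if not requires_approval(tool_name):
        return True, "auto-approved"
    if approve(tool_name, args):
        return True, "human-approved"
    return False, "rejected by reviewer"
```

Wiring this gate in front of `session.call_tool` keeps routine reads autonomous while ensuring a person signs off on every irreversible action.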

Because AI agents rely on LLMs for instructions, and LLMs hallucinate, agents that rely solely on them can severely damage an organization's security posture and efficiency. If agents are used for mission-critical or complex tasks such as air traffic control or healthcare, an erroneous LLM response can compromise the entire system. When considering AI agents, it is crucial to account for the risks involved and to implement ethical guidelines promoting responsible AI.

Conclusion

Given the trend and the promise that AI agents offer, they will likely be integrated into every conceivable use case to solve real-world problems autonomously. The possibilities are vast, and based on current market trends, we are on the verge of a massive restructuring of how we build and manage software solutions. The market is moving at a breakneck pace, so we should soon learn whether AI agents deliver on their promise or turn out to be just another overhyped technology.

Discover and read more posts from Kruti Chapaneri