For those who have worked with Citrix, MicroApps might ring a bell. The product let you create Microapps: small components that ran API calls against a backend service and made it easy to present the results on a front-end.
While that product didn't work out great for Citrix, I loved the idea and the concept behind it.
In most cases I have far too many apps available to me, and while some vendors focus on creating portals that unify application access, the easiest option is to remove the app entirely. In fact, I hate having access to multiple apps.
I just want access to the SPECIFIC action/function I need inside that app. I might have access to a large LOB (line-of-business) application where 99% of the UI and features are irrelevant to me; I only need to do a few small things to do my job. Having a way to access just that small part is a HUGE timesaver.
This is where MCP (Model Context Protocol) is going to play an important part. Many describe it as the USB-C connector between AI and apps and data; I consider it more of a proxy that makes service functionality from 100+ vendors available through natural language.
You could do this long before MCP by using function calls in GPT, but there are a number of things you need to understand when building that integration. You either define each API call yourself as a function, or you use an OpenAPI specification, and in both cases your application still has to make the actual API call to the service.
For instance, a function call to get the weather from a third-party API using GPT function calling would look something like this:
import requests

def get_weather(latitude, longitude):
    # Call the Open-Meteo API and return the current temperature in Celsius
    response = requests.get(
        f"https://api.open-meteo.com/v1/forecast"
        f"?latitude={latitude}&longitude={longitude}"
        f"&current=temperature_2m,wind_speed_10m"
        f"&hourly=temperature_2m,relative_humidity_2m,wind_speed_10m"
    )
    data = response.json()
    return data['current']['temperature_2m']

from openai import OpenAI
import json

client = OpenAI()

# Describe the function to the model so it knows when and how to call it
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get current temperature for provided coordinates in celsius.",
        "parameters": {
            "type": "object",
            "properties": {
                "latitude": {"type": "number"},
                "longitude": {"type": "number"}
            },
            "required": ["latitude", "longitude"],
            "additionalProperties": False
        },
        "strict": True
    }
}]

messages = [{"role": "user", "content": "What's the weather like in Paris today?"}]

completion = client.chat.completions.create(
    model="gpt-4o",
    messages=messages,
    tools=tools,
)
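Note that the snippet above only sends the tool definition; the model responds with a tool call that your code still has to execute and feed back before the model can answer. A minimal sketch of that second step, continuing the example above:

# Execute the tool call the model asked for
tool_call = completion.choices[0].message.tool_calls[0]
args = json.loads(tool_call.function.arguments)
result = get_weather(args["latitude"], args["longitude"])

# Append the model's tool call and our result, then ask for the final answer
messages.append(completion.choices[0].message)
messages.append({
    "role": "tool",
    "tool_call_id": tool_call.id,
    "content": str(result)
})
final = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)
print(final.choices[0].message.content)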
So this means we would need to build multiple of these definitions to allow function calls against different endpoints. Or, as mentioned, we can use an OpenAPI specification, which makes it easier to connect to multiple services but still requires the function specification (as described here –> Function calling with an OpenAPI specification | OpenAI Cookbook).
Now this can get quite cumbersome when you want to integrate with many different services and still ensure proper communication between an LLM app and those services.
MCP is an open-source protocol developed by Anthropic (Introducing the Model Context Protocol \ Anthropic) aimed at becoming the de facto standard for how AI assistants communicate with the systems where data lives.
It uses the concept of an MCP server, which is responsible for actually communicating with the backend services, abstracting away the complexity of writing all the different functions.
The core idea is that each server represents a set of APIs/actions for a solution or service. For instance, one MCP server can contain the definition for how to connect to a file server, and another for a cloud service such as Cloudflare.
Within the MCP host you can have one or multiple MCP clients, each talking to an MCP server for a specific action.
Then you have the MCP servers themselves, which can be NPX applets or Docker containers running on the same machine as the MCP host. FYI: the term "server" is a bit confusing, since everything runs on the same machine.
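To make the concept concrete, here is a minimal sketch of what such a server can look like, assuming the official MCP Python SDK (FastMCP); the server name "weather" and the tool are just illustrative, reusing the Open-Meteo call from earlier:

import requests
from mcp.server.fastmcp import FastMCP

# Illustrative server name; the host discovers this server's tools by name
mcp = FastMCP("weather")

@mcp.tool()
def get_weather(latitude: float, longitude: float) -> float:
    """Get the current temperature in Celsius for the given coordinates."""
    response = requests.get(
        f"https://api.open-meteo.com/v1/forecast"
        f"?latitude={latitude}&longitude={longitude}"
        f"&current=temperature_2m"
    )
    return response.json()["current"]["temperature_2m"]

if __name__ == "__main__":
    mcp.run()  # serves over stdio so a local MCP host can connect to it

Compared to the function-calling example earlier, the tool schema is derived from the type hints and docstring, so there is no separate JSON definition to maintain.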

The ecosystem of supported data sources is already growing quite large (modelcontextprotocol/servers: Model Context Protocol Servers); as you can see on GitHub, many providers now have their own MCP server. There is also a long list of providers here –> Smithery – Model Context Protocol Registry

To utilize the MCP protocol you need an MCP host, which in most cases is the Claude Desktop app or GitHub Copilot. For instance, installing the Cloudflare MCP server on my desktop is just a matter of installing a node package.
After installing it using the command (npx @cloudflare/mcp-server-cloudflare init), you get a bunch of different actions available in the desktop app.
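Under the hood, the init command registers the server in Claude Desktop's configuration file (claude_desktop_config.json). A hand-written entry follows the same pattern; the exact command and arguments below are illustrative, not necessarily what the installer writes:

{
  "mcpServers": {
    "cloudflare": {
      "command": "npx",
      "args": ["-y", "@cloudflare/mcp-server-cloudflare", "run", "<account-id>"]
    }
  }
}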

This allows you to interact with a multitude of different services using natural language.

I can of course have hundreds of different servers and tools available through MCP, and the language model then tries to decide which MCP tool to use depending on the prompt. The upside is that companies can create MCP servers containing the APIs and services they want to make available, making it easier for us to consume those services without writing all the function calls ourselves.
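On the consuming side, an MCP host essentially does this: connect to a server, list its tools, and invoke the one the model picks. A minimal sketch, assuming the official mcp Python SDK and the hypothetical weather_server.py file from the server sketch above:

import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # Launch the server as a subprocess and talk to it over stdio
    params = StdioServerParameters(command="python", args=["weather_server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Discover what the server offers, then call one tool
            tools = await session.list_tools()
            print("Available tools:", [t.name for t in tools.tools])
            result = await session.call_tool(
                "get_weather", {"latitude": 48.85, "longitude": 2.35}
            )
            print(result)

asyncio.run(main())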