This is the blog section. It has two categories: News and Releases.
Files in these directories will be listed in reverse chronological order.
It’s 2025, and apparently, if your infrastructure isn’t running on MCP servers, are you even in tech? From stealth startups to sleepy enterprises pretending to innovate, everyone claims to be “built on MCP” — or at least wishes they were. It’s the new badge of modernity.
In this guide, I’ll show how to build an MCP-compliant server using Apache OpenServerless and our custom MCP plugin. By deploying OpenServerless and using the plugin, you can quickly expose tools via the Model Context Protocol (MCP). This setup enables fast and portable AI workflows across any cloud or on-prem environment.
Spinning up an MCP server sounds cool and it looks easy. But the real pain doesn’t start until after the “hello world” works. Because running an MCP server isn’t the challenge — it’s keeping it running and updating it.
Want to make it available on the Internet? Prepare for a joyride through SSL, firewall configs, and reverse proxies. Thinking of scaling it? That’s when the fun begins: orchestration, autoscaling, persistence, model versioning, billing — suddenly you’re less “AI pioneer” and more “distributed systems janitor.”
This is where OpenServerless with MCP truly shines: enabling fast, portable, and secure AI tool deployment with zero DevOps, seamless orchestration, and full compliance with the Model Context Protocol.
olaris-mcp, the OpenServerless plugin to build MCP servers

We developed an Apache OpenServerless plugin, or more precisely an ops plugin, for building MCP servers with Apache OpenServerless functions. A quick reminder: ops is the OpenServerless CLI, and it supports plugins as a way to extend it with new commands.
This plugin allows you to create an MCP-compliant server in a fully serverless way—by simply writing functions and publishing them to OpenServerless.
The plugin can run locally for development or be deployed to any server for production use. We support both local and public (published on the Internet) MCP servers. We will cover the latter in a future article, as it enables interesting scenarios, such as inter-server communication, to be explored.
Note: in OpenServerless, a single MCP server is a package: a collection of tools, prompts, and resources, each represented as a distinct OpenServerless function. That means one server is always split into a number of microservices.
As we said, it’s an ops plugin and can be installed directly using:
$ ops -plugin https://github.com/mastrogpt/olaris-mcp
To verify that the plugin has been installed correctly, run:
$ ops mcp
You should see the following usage synopsis (shortened):
Usage:
mcp new <package> [<description>] (--tool=<tool>|--resource=<resource>|--prompt=<prompt>|--clean=<clean>) [--redis] [--postgres] [--milvus] [--s3]
mcp run <package> [--sse]
mcp test <package> [--sample] [--norun]
mcp install [<package>] [--cursor] [--claude] [--5ire] [--uninstall]
mcp inspect <package> [--sse]
Let’s see in detail what the available commands do:
- ops mcp new – Create a new MCP package tool, prompt, or resource.
- ops mcp run – Run the specified package as an MCP server.
- ops mcp test – Test the generated MCP server via the CLI.
- ops mcp inspect – Launch the MCP web inspector for the specified package.
- ops mcp install – Install or uninstall the MCP server locally for the Cursor, Claude, or 5ire environments.

Let’s walk through the steps to create a simple MCP server – for example, one that provides weather information for any location in the world.
We’ll start by creating a serverless function that acts as a proxy using the following command:
$ ops mcp new demomcp --tool=weather
This command initializes a new MCP package named demomcp and defines a tool called weather.
Next, you’ll need to describe your MCP tool using metadata annotations. These annotations define the tool type, description, and input parameters:
#-a mcp:type tool
#-a mcp:desc "Provides weather information for a given location"
#-a input:str "The location to retrieve weather data for"
Now it’s time to implement the logic for your weather function.
You can use generative AI to get the required code quickly. For instance, the following prompt can help you generate a simple function that retrieves weather information:
AI Prompt:
A Python function get_weather(location) using requests and open-meteo.com that retrieves the given location, selects the first match, then fetches and returns the weather information for that location.
We do not include the AI-generated implementation verbatim here, as ChatGPT typically returns a valid and usable function; a minimal sketch of what it might produce follows.
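For reference, here is one possible minimal sketch (our own, not ChatGPT output) using Open-Meteo’s public geocoding and forecast endpoints; the field names match what these APIs currently return:

```python
import requests

def get_weather(location):
    # Resolve the location name to coordinates with the Open-Meteo geocoding API.
    geo = requests.get(
        "https://geocoding-api.open-meteo.com/v1/search",
        params={"name": location, "count": 1},
        timeout=10,
    ).json()
    results = geo.get("results")
    if not results:
        return f"Could not find location: {location}"
    place = results[0]
    # Fetch the current weather for the first match.
    data = requests.get(
        "https://api.open-meteo.com/v1/forecast",
        params={
            "latitude": place["latitude"],
            "longitude": place["longitude"],
            "current_weather": "true",
        },
        timeout=10,
    ).json()
    weather = data.get("current_weather", {})
    weather["location"] = f'{place["name"]}, {place.get("country", "")}'
    return weather
```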
Assuming you’ve implemented a get_weather(location) function, you can now create a wrapper to handle MCP-style invocation:
def weather(args):
    inp = args.get("input", "")
    if inp:
        out = get_weather(inp)
    else:
        out = "Please provide a location to get the weather information for."
    return {"output": out}
You can deploy and test your MCP function as follows:
$ ops ide deploy demomcp/weather
ok: updated action demomcp/weather
$ ops invoke demomcp/weather
{
"output": "Please provide a location to get the weather information for."
}
$ ops invoke demomcp/weather input=Rome
{
"output": {
"location": "Rome, Italy",
"temperature": 26.0,
"time": "2025-06-22T06:45",
"weathercode": 2,
"winddirection": 360,
"windspeed": 2.9
}
}
$ ops invoke demomcp/weather input=NontExistingCity
{
"output": "Could not find location: NontExistingCity"
}
Your MCP server is now up and running, and you can test it using the graphical inspector with the following command:
$ ops mcp inspect demomcp
The Inspector connects to your MCP server, lists available tools and resources, and allows you to test their behavior interactively.
Your MCP server is now ready to be integrated into any chat interface that supports MCP servers.
In this example, we use 5ire, a free AI assistant and MCP client that provides an excellent environment for running and testing MCP tools.
First, install the ops CLI. You can find installation instructions on the OpenServerless installation page.
Install the MCP plugin using:
$ ops -plugin https://github.com/mastrogpt/olaris-mcp
Use the following command to authenticate:
$ ops ide login
Deploy your toolset to 5ire with:
$ ops mcp install demomcp --5ire
You’re all set! Now you can access your 5ire client and use the deployed MCP server in real conversations.
Let’s walk through how the tool works in practice:
In 5ire, find the tool (demomcp) in the list of MCP servers and enable it; you can then use it in your conversations.

With Apache OpenServerless, we showed how to build and deploy a serverless MCP server in minutes, bypassing all complex system configuration.
This example covered only the local MCP server configuration. The more powerful setup, however, uses public MCP servers, which enable inter-server communication via agent interaction protocols.
This is just the beginning. Public MCP servers open the door to multi-agent interactions, federation, and more.
Stay tuned for more updates from Apache OpenServerless!
If you have never heard of it, you may wonder: what is Apache OpenServerless? The short answer: a portable, self-contained, and complete cloud-native serverless platform, built on top of Kubernetes and especially suitable for developing production-ready AI applications with minimal effort. Because of its portability and availability in every environment, including air-gapped ones, it shines when you have strong privacy and security constraints and need to build Private AI applications.
OpenServerless embraces the functional programming paradigm, enabling developers to build modular, stateless functions ideal for scalable AI workloads. This model aligns naturally with serverless architecture and simplifies the integration of both public and private LLMs: developers can invoke proprietary APIs like OpenAI, or deploy and run private models locally, ensuring full control over sensitive data. A key strength is the ability to run GPU-accelerated runtimes, allowing code to execute directly on GPUs for high-performance inference or training tasks.
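To illustrate the first option, here is a minimal sketch (not an official OpenServerless API) of a serverless function forwarding a prompt to an OpenAI-compatible chat completions endpoint; the environment variable names and default model are assumptions for the example:

```python
import os
import requests

def main(args):
    # Illustrative configuration: endpoint and key are taken from the environment.
    base_url = os.environ.get("LLM_BASE_URL", "https://api.openai.com/v1")
    api_key = os.environ["LLM_API_KEY"]
    resp = requests.post(
        f"{base_url}/chat/completions",
        headers={"Authorization": f"Bearer {api_key}"},
        json={
            "model": args.get("model", "gpt-4o-mini"),
            "messages": [{"role": "user", "content": args.get("input", "Hello!")}],
        },
        timeout=60,
    )
    resp.raise_for_status()
    # Return the assistant's reply as the function result.
    return {"output": resp.json()["choices"][0]["message"]["content"]}
```

Pointing LLM_BASE_URL at a locally hosted, OpenAI-compatible model server instead of the public API is what keeps sensitive data fully private.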
The project Apache OpenServerless is closely related to another serverless project: Apache OpenWhisk. OpenWhisk is a portable serverless engine originally developed and open sourced by IBM, and later adopted and further developed by vendors of the calibre of Adobe, Naver, and Digital Ocean.
OpenWhisk is an excellent foundation for providing FaaS services, and indeed it is adopted by many cloud providers as their serverless engine. It is also widely used in academia for research on serverless. It is highly scalable and extremely robust and reliable. However, OpenWhisk is not yet widely used because, by itself, it is not a full platform: it is only a FaaS service and, while cloud providers use it, they have little interest in making it available for wider use.
A team of contributors to OpenWhisk, working initially with the startup Nimbella (acquired by Digital Ocean) and later with Nuvolaris, developed it further to make it widely accessible and more useful out of the box, adding all the required components with the goal of making it a complete serverless environment. Indeed, serverless is generally most useful when coupled with storage, a cache, a database, and a frontend. Given the popularity of LLM-based application development, it has also been extended to fully support the development of AI applications.
The project was then donated to the Apache Software Foundation and released as Apache OpenServerless. Note that in this text we sometimes omit Apache from the name, but always keep in mind that the full names of the projects are respectively Apache OpenWhisk and Apache OpenServerless, as they are both projects owned by the Apache Software Foundation.
To clarify the difference between OpenWhisk and OpenServerless you can think in this way: if OpenWhisk were Linux, then OpenServerless would be Ubuntu. In short, it is a distribution of OpenWhisk providing a Kubernetes operator to install and manage it, a rich CLI with integrated installation and development tools and a collection of starters to build AI applications.
You can see what is in OpenServerless in the picture below:
As you can note, at the core there is OpenWhisk, providing the scalable FaaS service: a set of controllers accepts requests and queues them in Kafka, while a set of invokers serves the requests on demand, instantiating runtimes. OpenServerless also adds a Kubernetes operator that manages the whole system. The main purpose of the operator is to deploy OpenWhisk, but it also deploys the integrated services: at the moment there are Redis (in the open ValKey flavour), PostgreSQL (SQL database) with the MongoDB-compatible adapter FerretDB (NoSQL), the vector database Milvus, and an S3 object storage service, for which we currently support both Minio and Ceph as backends.
We also have a special service, called the streamer, designed to support SSE (server-sent events), commonly used by AI applications to stream answers from LLMs.
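For context, an SSE response is just a long-lived HTTP response delivered as `data:` lines; a generic client sketch (the URL here is hypothetical, not the streamer’s actual endpoint) looks like this:

```python
import requests

# Hypothetical streaming endpoint; substitute your deployment's URL.
url = "https://example.com/api/my/stream"

with requests.get(url, stream=True, timeout=60) as resp:
    for line in resp.iter_lines(decode_unicode=True):
        # SSE payloads arrive as lines of the form "data: <chunk>".
        if line and line.startswith("data:"):
            print(line[len("data:"):].strip(), flush=True)
```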
The operator is actually pretty powerful: it is configurable and manages resources in the environment it controls, creating databases, buckets, and Redis prefixes, along with the secrets needed to access them.
OpenWhisk has a large set of runtimes, but instead of supporting all of them we focused on and optimized the most used languages, typically Python, JavaScript, and PHP, and provided a rich set of libraries for using the integrated services.
The operator is controlled by a rich CLI, called ops. The name is a pun: it is short for OPenServerless, but also for Operations… and it is also what you say (“OoooPS!”) when you make a mistake. The CLI completes the picture: it is extremely powerful and even expandable with plugins. It manages serverless resources as in OpenWhisk, but also includes the ability to install OpenServerless on multiple cloud providers, and it integrates powerful development tools. We will discuss it in more detail later.
Let’s start with installation. You install OpenWhisk with a Helm chart on a set of well-known Kubernetes clusters, like Amazon EKS, IBM IKS, or OpenShift v4. You need a Kubernetes cluster that is already properly configured, and the installer deploys only the engine, without any other services.
The OpenServerless CLI is more complete. It installs OpenWhisk by deploying the operator in a Kubernetes cluster and sending it a configuration, but it is also able to create a suitable cluster for you.
Indeed, the documentation explains how to prepare a Kubernetes cluster on Amazon AWS, Microsoft Azure, and Google GCP using ops: there is an interactive configuration, after which ops builds a suitable cluster with all the parameters in place to install OpenServerless in it.
When installing OpenServerless, you can also select which services you want to enable, along with many essential configuration parameters, all by using the ops CLI to set the configuration before performing the installation.
After the installation, the CLI is useful for administering the cluster, adding new users, and so on. Note that each user gets a complete set of services: not only an area (called a namespace) for serverless functions, but also a SQL database (with a NoSQL adapter), a vector database, a bucket for public web content and another for private data, and a Redis prefix (to isolate your keys in Redis).
Note that the system supports a public area for web content using a DNS configuration. You need a DNS domain for an OpenServerless installation, and you usually point the root of the domain (@) and a wildcard (*) to a load balancer in front of it. Each user then has a different web area to upload their web content, plus a mapping to their serverless functions (’/api/my’) suitable for deploying SPA applications with a serverless backend.
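As an illustration, the records for a hypothetical domain example.com could look like this (the IP address and zone-file style are only for the example):

```
; illustrative DNS records for a hypothetical OpenServerless domain example.com
@   IN  A   203.0.113.10   ; domain root points to the load balancer
*   IN  A   203.0.113.10   ; wildcard covers the per-user web areas
```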
So far so good, but the work would not be complete without suitable development tools. You can deploy each function easily, but it is a bit painful to have to deploy every function separately. Furthermore, you have to provide each function with options to change the runtime type, the memory constraints, timeouts, and so on. OpenWhisk supports a manifest format to do that, but does not offer other facilities for deployment.
It is still possible to use the manifest, but we also added a configuration system based on conventions: just put your code in directories and the system will automatically build and deploy.
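For example, a hypothetical project layout could look like this (names are illustrative; the exact conventions are described in the project documentation):

```
packages/
└── myapp/
    ├── hello.py        # deployed as the action myapp/hello
    └── index/          # a multi-file action, built and deployed as myapp/index
        └── __main__.py
```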
Also in this case, the superpowers of the ops CLI come to our rescue. The ops development tools allow us to incrementally publish all the functions we have written, to manage their dependencies and annotations during publication, and to publish the web part of our application. Furthermore, it is possible to integrate the build scripts of our Angular, React, or Svelte application so they are invoked during the publication process. Other useful tools let us handle and interact with the integrated services (PostgreSQL, Minio, Redis).
All of this looks interesting, but it is actually just the starting point for building AI applications, as this is our main focus. OpenServerless lays the groundwork by providing a flexible, event-driven foundation, but its real power emerges when applied to AI-centric workflows.
Our primary goal is to enable developers and data scientists to move beyond basic automation and toward complex AI systems that integrate reasoning, natural language understanding, and data processing. OpenServerless becomes a powerful platform for rapid experimentation, secure deployment, and scalable AI services. From RAG pipelines to autonomous agents, this environment is designed to evolve with the needs of modern AI, turning abstract ideas into production-ready solutions without the usual overhead of managing infrastructure or sacrificing control.
Apache OpenServerless is an innovative project from the Apache Incubator, designed to deliver a versatile and scalable serverless environment compatible with any cloud provider or Kubernetes distribution. Built upon the robust Apache OpenWhisk framework, it aims to empower developers to create applications of any complexity, from simple forms to advanced AI-driven solutions.
Currently in its preview phase, Apache OpenServerless invites the community to provide feedback and contributions, accelerating its journey towards a stable release.
Apache OpenServerless integrates three core components that collectively form a complete serverless ecosystem:
At the heart of Apache OpenServerless lies Apache OpenWhisk, a distributed and scalable open-source platform for executing serverless functions. OpenWhisk enables dynamic execution of lightweight code snippets, or “Actions,” written in multiple programming languages. These Actions respond to events (via triggers) or HTTP requests, seamlessly adapting to various workloads.
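To give a concrete flavour, the canonical minimal Action in Python is just a main function that receives a dictionary of parameters and returns a JSON-serializable dictionary:

```python
def main(args):
    # Parameters arrive as a dictionary; the result must be a dictionary too.
    name = args.get("name", "world")
    return {"greeting": f"Hello, {name}!"}
```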
To support various application needs, Apache OpenServerless provides a set of pre-configured services, including:

- a Redis-compatible cache (in the open ValKey flavour)
- PostgreSQL as a SQL database, with the MongoDB-compatible adapter FerretDB for NoSQL
- the Milvus vector database
- S3-compatible object storage (with Minio or Ceph as backends)
To streamline development, Apache OpenServerless offers:

- the rich ops CLI, with integrated installation and development tools and support for plugins
- a collection of starters to build applications, including AI applications
OpenServerless is trying to set new standards in the Function as a Service (FaaS) landscape. Built on Kubernetes and Apache OpenWhisk, OpenServerless offers an infrastructure-agnostic approach that can be installed on-premises or in public clouds. This flexibility ensures that data remains under the control of the organization, addressing key privacy concerns for industries with strict data governance requirements. By providing companies with the option to run their applications on their terms, OpenServerless aligns with the demand for transparent and secure data handling in a serverless environment.
OpenServerless also benefits from the robust, scalable nature of Kubernetes, making it ideal for handling asynchronous, event-driven workloads that are core to modern serverless applications. With OpenWhisk’s powerful action-based execution model, it provides a straightforward framework for developers to deploy and manage functions seamlessly. The addition of observability tools allows developers to monitor performance, making it easy to optimize and troubleshoot.
In essence, OpenServerless is more than just a FaaS—it’s a privacy-focused, Kubernetes-native solution that empowers companies to innovate securely and effectively. With its unique approach to data control and its ability to support intensive workloads, OpenServerless is reshaping the serverless ecosystem and opening up exciting new applications for AI-powered, cloud-native solutions.
OpenServerless is a complex project. It has a lot of moving parts and relies heavily on Kubernetes.
There are many subprojects, and each subproject has a complex set of dependencies. Setting up all those dependencies is usually complex and time consuming.
I have worked on a project where it used to take literally a couple of days to get everything ready for coding. Also, you were never sure that everything was set up correctly, because the dependencies were constantly changing.
For this reason, we have made a special effort to provide an easy and consistent way to have a standardized development environment for OpenServerless.
We considered a few options for setting up the development environment.
The first option is of course a setup script, but since you may be working on Linux, Windows or Mac, this approach turns out to be difficult and fragile.
The second option is to use Docker, and indeed for a while we used a Docker image in DevContainer format as our development environment. We also set up a Kubernetes development cluster using Kind, that is, “Kubernetes-in-Docker”.
However, this approach proved to be slow and suffered from a number of problems related to Docker. So we gradually moved to using a full virtual machine, and this is the approach we are now taking with OpenServerless.
The development environment is a virtual machine initialized with a cloud-init script. Cloud-init is a standard for initializing a virtual machine in the cloud.
Using this cloud-init script, you can actually run a development environment in basically any cloud provider if you want a shared one.
Or, if you want to use your local machine, assuming you have at least 16GB of memory, you can start the VM and initialize it with cloud-init on Linux, Windows, and macOS using Multipass.
The README for [Apache OpenServerless](https://github.com/apache/openserverless) is actually entirely devoted to setting up the development virtual machine with Multipass and cloud-init.
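For instance, launching the VM locally could look like this (the VM name, resource sizes, and cloud-init file name are illustrative; the README has the authoritative command):

```
$ multipass launch --name osdev --cpus 4 --memory 16G --disk 50G \
    --cloud-init cloud-init.yaml
```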
The development machine is actually packed with goodies. For a start, it includes Kubernetes in the form of K3S, a lightweight but full-featured version of Kubernetes. Well, technically, K3S is an API-compatible, work-alike sister re-implementation of Kubernetes, but for all practical purposes, it IS Kubernetes.
But we need more than Kubernetes. We have a number of subprojects, and for each one there’s a different set of tools and programming languages that need to be set up. We used to have a script to set up these dependencies, but since it turned out to be tedious to update, we switched to using [the package manager Nix](https://nixos.org/download/). This is a tool that allows you to set up development environments (actually, any environment) declaratively, by writing a shell.nix script in the Nix language that defines the development environment. The virtual machine also includes Nix, along with a tool called direnv that configures Nix automatically, loading a different shell.nix every time you change directory.
Last but not least, we use VSCode as it provides remote development features and allows you to work in the virtual machine as if it were a local folder. Instructions for setting up VSCode to use the virtual machine are provided in the README.
It is also worth mentioning that, since we use the build tool task everywhere, we included it in the VM. There is also a license manager, license-eye, to ensure that all files are properly licensed under the Apache license.
The Apache OpenServerless project’s goal is to build a serverless distribution that runs in all major flavors of Kubernetes in public and private clouds, and in any virtual machine running Linux in any cloud. It is not just a serverless engine, but a complete set of integrated tools to easily build cloud-native applications, with a focus on building AI applications.
Specifically, we are building on top of Apache OpenWhisk, which includes Apache Kafka and Apache CouchDB as components, adding Apache APISIX as an API gateway and a set of custom runtimes.
We have a Kubernetes operator to manage all the components, and a rich CLI to support installation and development.
We have a strong focus on development tools: the system includes support for developing full-stack applications in web-based IDEs using the DevContainer standard, with built-in full-stack hot reload (both backend and front-end).
We will have a set of starters that support the development of AI applications based on LLMs. Furthermore, since many AI applications are basically a choreography of functions, something well supported in the serverless world, we will have a workflow generator to easily develop such applications.
The project is already running at the Apache Software Foundation and we are in the process of migrating the contributed code base.
Our home is https://github.com/apache/openserverless
Join us by subscribing to our mailing list: send an email to
dev-subscribe@openserverless.apache.org
We are in the process of submitting the open source codebase of Nuvolaris Community to the Apache Software Foundation as an Apache project, and our proposed name is Apache OpenServerless.
The name is excellent because it conveys what the project is: a complete serverless environment for running cloud-native applications anywhere. We have already written the proposal and found our champions and mentors.
But before we voted the project into the Incubator PMC, we wanted to make sure that the chosen name was available. There was some concern because the name “serverless” is already trademarked, although we found that combinations using the word “serverless” are trademarked disjointly, so it should work.
To resolve this, we initiated research with the Apache trademark team to be sure the name was usable. The research took some time, but the result was: approved! So now we are ready to vote, get the process approved by the community, and start building the next standard in the open source world: Apache OpenServerless!
It is official! Apache OpenServerless is now an incubating project at the Apache Software Foundation! The result of the vote was positive and this is the email announcing the result of the vote on the Incubator Mailing List.
From: Jean-Baptiste Onofré
Date: Tuesday 18 June 2024 16:20:06 BST
Subject: [RESULT][VOTE] Accept OpenServerless into the ASF incubator
Hi folks,
this vote passed with the following result:
+1 (binding): Francis Chuang, PJ Fanning, Yu Xiao, Duo Zhang,
Bertrand Delacretaz, Zhongyi Tan, Zhang Yonglun, Charles Zhang,
Enrico Olivelli, Dave Fisher, François Papon,
Roman Shaposhnik, Yu Li, Calvin Kirs
+1 (non binding): ZhangJian He, Nicolò Boschi, likeho
Thanks all for your vote !
Regards JB
Hello everyone, we are happy to announce that we submitted the OpenServerless project to the Apache Software Foundation. We are going to develop our Nuvolaris Community into a worldwide open source project at the highest level.
The goal is to provide the open source foundation of our Nuvolaris Enterprise product as a vendor independent and stable project maintained by a community.
To achieve this goal, submitting the Apache OpenServerless proposal is the natural step. The link to the proposal can be found here.
Our codebase is well tested and already has a number of paying and open source customers. We have a network of contributors who have contributed to the codebase, and we have found mentors and a champion for the project.
But what is the Nuvolaris Community (soon to become Apache OpenServerless)? There is already an open source serverless engine, Apache OpenWhisk; I am one of the PMC members of that project and also wrote an O’Reilly book about it: Learning Apache OpenWhisk.
What is missing now is a complete distribution including integrated services to build a complete platform. We want the Apache OpenServerless project to fill this gap.
With Nuvolaris Community we provide storage, databases, caches, frontend, IDE, starters, and even LLM support on top of OpenWhisk. We have made this available and running on all major cloud providers’ Kubernetes platforms (EKS, AKS, GKE, LKE) and also on the Kubernetes distributions of all major Linux vendors (RedHat OpenShift, Ubuntu MicroK8s, SuSE K3s).
Simply put, if OpenWhisk is Linux, then Nuvolaris is RedHat. The OpenServerless project aims to be the first complete open source distribution that makes it easy to build cloud-native applications with portability in mind.
And we want to build the platform in the open, contributing our work to the Apache Software Foundation to make it widely available and get more vendors involved in supporting it.