
MCP Notes & Experiments

Jonathan Bloedow edited this page Aug 14, 2025 · 46 revisions

Context

JENNER produces pretty amazing results as a Custom GPT, in terms of both content and working Python LASER code. But the interface is browser chat, and it would be nice if users could sit in their IDE and interact with it without copy-pasting code and error messages back and forth. This points to: JENNER-as-a-Service. Everything is better as a webservice. But OpenAI does not currently offer a way to turn Custom GPTs into webservices.

Putative Goal

MCPs are a way of webservicifying content-specific LLMs such that LLM clients can query them. What would it take to turn JENNER into an MCP Server? Note that the results need to be just as good as the results we're getting from JENNER.

Links

OpenAI: Build & Use MCP Service using FastMCP

  • It's not at all clear at first that FastMCP is useful for building a server; most of the examples are client code.
  • Their example server code does run out of the box, but basic client/server testing was a lot of pain -- just getting the basic handshaking working.
  • Using ChatGPT to write a FastMCP server is basically a waste of time, even if you upload all the pydocs as a text document.

Some official docs on querying existing third party remote servers

  • Not entirely clear how this page fits in taxonomically with the previous one.

OpenAI Official Intro to "Remote MCP"

  • Surprisingly short and meager. Possibly useful links. Haven't tested all sample code yet. Still working through it. (Other content seems much more promising.)

JENNER As Open AI Assistant?

  • Works in the sense that one can get RAG-style answers from the docs, but as for the code produced, initial experiments show very poor results compared to JENNER proper (the Custom GPT).

Build & Deploy Remote MCP Server (Cloudflare)

  • A bit Cloudflare-specific, as one might imagine, but this page has exactly the teaser title I would want.

Langchain Homebrew

  • Making a RAG that consumes the laser.pdf and answers questions gives massively different results depending on which GPT model you specify. Some models actually respond that "the docs don't have any information about LASER"! This is using langchain code I've been using successfully for years as a basic RAG tool. 4o does produce plausible code. Haven't done any real experiments yet on non-default vector stores, chunking settings, temperature, etc.
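The model-comparison loop behind that observation can be sketched as a tiny harness. Here the actual langchain RetrievalQA call is replaced by a stub with canned answers, so the model names and responses are purely illustrative:

```python
# Sketch of a harness for comparing RAG answers across models.
# `ask_rag` is a STUB standing in for a real langchain RetrievalQA
# chain built over laser.pdf -- the answers below are canned.


def ask_rag(model: str, question: str) -> str:
    """Stub: in the real harness this would build a langchain chain
    with the given chat model and return its answer."""
    canned = {
        "gpt-4o": "LASER is an agent-based simulation framework...",
        "gpt-3.5-turbo": "The docs don't have any information about LASER.",
    }
    return canned.get(model, "(no answer)")


def compare_models(models: list[str], question: str) -> dict[str, str]:
    """Run the same question against several models, collect answers."""
    return {m: ask_rag(m, question) for m in models}


answers = compare_models(["gpt-4o", "gpt-3.5-turbo"], "What is LASER?")
for model, answer in answers.items():
    print(f"{model}: {answer}")
```

In a real run, the interesting variables to sweep alongside the model are the vector store, chunk size/overlap, and temperature, which this sketch leaves out.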

AWS MCP List

  • Seems to show Amazon going all-in on MCP.
  • None of these seem to be hosted by Amazon -- which is sort of fascinating to me, since it's not like they lack good infrastructure for hosting servers and services! They seem to be MCP Servers you'd run for yourself.

ChatGPT:

  • "Just build your own hello world MCP Server with ChatGPT" is a surprisingly not-immediate-path-to-success story based on my experiments to date. Many iterations involved.

Official Seeming Page

  • But honestly haven't found it particularly useful yet.

Some code that GPT assumed would be useful and functional:

ChatGPT Connectors

  • The only thing that made sense was Teams, but it didn't bear any fruit.
  • Asked around if colleagues had tried this: no.

GitHub -> MCP

MCP-Pack Experiment (Katherine)

  • Need to test this out on https://github.com/InstituteforDiseaseModeling/laser.git.
  • KR showed me a demo of this working.
  • It's an example of propping something up locally for a power-dev.
  • There's a LOT going on here, numerous moving parts, lots of dependencies. Not quite sure how it all works together.
  • Tried taking it for a spin:
Analyzing repository: https://github.com/InstituteforDiseaseModeling/laser.git
Request error: 404 Client Error: Not Found for url: https://api.github.com/repos/InstituteforDiseaseModeling/laser.git/contents/. Retrying in 1.6 seconds (attempt 1/5)...
(attempts 2-5: same 404 response, with backoff increasing up to 16.7 seconds)
  • Think I need more explanation. Meikang says she got some version of this working for someone.
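One guess about those 404s (not verified against MCP-Pack's code): the clone URL, with its `.git` suffix, is being passed straight into the GitHub contents API, which wants a plain `owner/repo` path. A sketch of normalizing the URL first:

```python
from urllib.parse import urlparse


def github_api_contents_url(repo_url: str) -> str:
    """Turn a GitHub clone URL into a contents-API URL,
    stripping any trailing `.git` from the repo path."""
    path = urlparse(repo_url).path.strip("/")
    if path.endswith(".git"):
        path = path[: -len(".git")]
    return f"https://api.github.com/repos/{path}/contents/"


print(github_api_contents_url(
    "https://github.com/InstituteforDiseaseModeling/laser.git"))
# -> https://api.github.com/repos/InstituteforDiseaseModeling/laser/contents/
```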

At this point in time, it seems to have occurred to a few people that kickstarting your own GitHub repo into an MCP Server is going to be a Thing People Want To Do, so it would be useful to provide a super-simple turnkey solution for it. There are two technologies I found claiming to do just that...

gitmcp

Context7

Personal Observations

Almost every MCP experiment I've tried from the above links has been mostly a dead end. No new technology that eventually proved useful to me has ever had this much dead-endery in the initial tryouts.

It seems like MCP Servers can be propped up in 3 ways. (Well, all servers can be propped up like this...)

  • Locally and Personally: This seems surprisingly common as "the way" to run an MCP server, even off-the-shelf ones. Often out of a docker container (run with -d). This does NOT seem like a highly scalable solution for the Average User. Wouldn't it be easier to install a plugin?
  • Organizationally Hosted: Your org can run your own or a third-party MCP server and make the endpoint well known.
  • Third Party Hosted: The org who built the MCP server can actually host it and users can just point to that. This is what I assumed was the norm, but apparently it's not. At least not yet.

To that point, here's an article (and video?) about how GitHub is moving away from "just run this MCP server as a local docker daemon" to "just point to our hosted endpoint":

Enabling MCP Servers with Copilot in VSCode (clorton)

No can do - disallowed by organization management.

(Screenshot from 2025-08-14, 10:28 AM; open the image in a new tab for details.)

Foundation Documentation

On this day I learned that we have internal docs about this whole topic: https://gatesfoundation.atlassian.net/wiki/spaces/SRE/pages/4076306456/A+Builder+s+Guide+to+MCP+Servers. I have requested access.

A Long-Awaited Success Story

tl;dr I was able to build a seemingly useful LASER MCP Server.

Yesterday afternoon I got a functioning MCP Server that mimics JENNER to the working-prototype stage.

  • It's built around FastMCP.
  • FastMCP is crap software and profoundly frustrating to get working. GPT is nearly useless for this.
  • I ended up downloading the FastMCP documentation text file and creating a Custom GPT based on that file. This finally enabled me to create a working server with a couple of routes.
  • I then combined FastMCP/FastAPI with langchain to create a RAG server that was MCP compliant.
  • I then ran this locally and tested with curl. This never worked. You should always be able to test a server with curl, but eventually GPT-4o and I concluded that FastMCP was simply buggy: it claimed I wasn't sending session IDs when I was.
  • I then resorted to a FastMCP Client as a minimal test client instead of curl and fortunately this worked. Yay.
  • I added a search route and a generate_code route. The former is for answering human-ish questions and the latter for generating large blocks or scripts of working LASER code.
  • I then dockerized my service and eventually got that working. I pushed my docker image to docker.io (I should be using Artifactory).
  • I then hosted this on my own personal free Render account.
  • I used my own personal OPENAI_API_KEY. Didn't see any other options.
  • Running my mcp test client against the Render-hosted LASER MCP Server was successful. Yay.
  • Not being a big IDE guy, I asked a colleague to point his AI-enabled IDE to the server. See notes above.
  • github repo: https://github.com/InstituteforDiseaseModeling/laser-mcp
  • Prototype endpoint is here: https://laser-mcp.onrender.com/mcp
