This has the potential to be a really intuitive way of utilising MCP servers with Ollama, but it's missing the natural flow of knowledge gathering and execution that most LLM applications strive for. It would be great if settings were added that let the model make more decisions between responses, calling further tools as it sees fit, and thus return a more complete output.
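As a rough illustration of what that flow could look like, here is a minimal sketch of a multi-step tool-calling loop with the official Ollama Python client: the model is allowed to keep requesting tool calls between responses until it decides it has enough information to answer. The `web_search` tool, the `llama3.1` model name, and the `max_steps` setting are illustrative assumptions; in a real setup the tool list would be supplied by the connected MCP server rather than defined locally.

```python
import ollama

def web_search(query: str) -> str:
    """Placeholder tool; a real setup would proxy this call to an MCP server."""
    return f"(results for: {query})"

TOOLS = {"web_search": web_search}

def chat_with_tools(prompt: str, max_steps: int = 5) -> str:
    messages = [{"role": "user", "content": prompt}]
    for _ in range(max_steps):
        response = ollama.chat(
            model="llama3.1",
            messages=messages,
            tools=[web_search],  # the client derives a tool schema from the function signature
        )
        messages.append(response.message)
        tool_calls = response.message.tool_calls or []
        if not tool_calls:
            # The model chose to answer instead of gathering more information.
            return response.message.content
        for call in tool_calls:
            fn = TOOLS.get(call.function.name)
            result = fn(**call.function.arguments) if fn else "unknown tool"
            messages.append(
                {"role": "tool", "content": str(result), "name": call.function.name}
            )
    return "(hit the step limit without a final answer)"

print(chat_with_tools("What's new in the latest Ollama release?"))
```

The key design point is the loop itself: instead of a single request-response, the model gets to decide after each tool result whether to gather more information or finish, which is exactly the in-between decision making the current integration lacks.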