Conversation

iceljc (Collaborator) commented on Sep 10, 2025

PR Type

Enhancement


Description

  • Add reasoning effort level configuration for chart plotting

  • Extract LLM configuration into dedicated method

  • Update settings with new reasoning effort parameter

  • Enhance chart initialization requirements in template


Diagram Walkthrough

flowchart LR
  A["Settings"] --> B["GetLlmConfig Method"]
  B --> C["AgentLlmConfig"]
  C --> D["Chart Plot Agent"]
  E["State Service"] --> B

File Walkthrough

Relevant files
Enhancement

PlotChartFn.cs (+17/-4): Extract LLM config with reasoning effort
src/Plugins/BotSharp.Plugin.ChartHandler/Functions/PlotChartFn.cs

  • Extract LLM configuration creation into GetLlmConfig() method
  • Add reasoning effort level support with state service integration
  • Replace inline config creation with method call

Configuration changes

ChartHandlerSettings.cs (+1/-0): Add reasoning effort level setting
src/Plugins/BotSharp.Plugin.ChartHandler/Settings/ChartHandlerSettings.cs

  • Add ReasoningEffortLevel property to the ChartPlotSetting class (see the sketch below)
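
For context, a minimal sketch of the settings shape, inferred from the usages quoted later in this review (_settings?.ChartPlot?.MaxOutputTokens and _settings?.ChartPlot?.ReasoningEffortLevel); the property types and surrounding class layout are assumptions, not the actual file contents:

public class ChartHandlerSettings
{
    // Assumed container; PlotChartFn reads settings via _settings?.ChartPlot?.*
    public ChartPlotSetting? ChartPlot { get; set; }
}

public class ChartPlotSetting
{
    // Existing cap on LLM output size; GetLlmConfig falls back to 8192 when this is unset.
    public int? MaxOutputTokens { get; set; }

    // New in this PR: reasoning effort forwarded into AgentLlmConfig.
    public string? ReasoningEffortLevel { get; set; }
}
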
appsettings.json (+3/-1): Configure chart plotting parameters
src/WebStarter/appsettings.json

  • Add MaxOutputTokens and ReasoningEffortLevel to the ChartHandler configuration (illustrated below)
  • Set default values for the new chart plotting parameters
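
Based on the configuration diff quoted in the suggestions further down ("MaxOutputTokens": 8192, "ReasoningEffortLevel": "minimal"), the new section plausibly looks like the sketch below; the ChartHandler/ChartPlot key nesting is an assumption inferred from the settings class, and the inline comment relies on the .NET JSON configuration provider tolerating comments in appsettings.json:

{
  // Sketch only: key nesting is inferred; the values are the defaults shown in the diff.
  "ChartHandler": {
    "ChartPlot": {
      "MaxOutputTokens": 8192,
      "ReasoningEffortLevel": "minimal"
    }
  }
}
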
Documentation

util-chart-plot_instruction.liquid (+1/-1): Emphasize chart initialization requirement
src/Plugins/BotSharp.Plugin.ChartHandler/data/agents/6745151e-6d46-4a02-8de4-1c4f21c7da95/templates/util-chart-plot_instruction.liquid

  • Strengthen the chart initialization requirement with "must" emphasis


PR Reviewer Guide 🔍

Here are some key observations to aid the review process:

⏱️ Estimated effort to review: 2 🔵🔵⚪⚪⚪
🧪 No relevant tests
🔒 No security concerns identified
⚡ Recommended focus areas for review

Type Safety

The reasoning effort level is handled as a free-form string from settings/state without validation. Consider constraining it to known values or normalizing it so that an invalid config does not reach downstream LLM calls.

var maxOutputTokens = _settings?.ChartPlot?.MaxOutputTokens ?? 8192;
var reasoningEffortLevel = _settings?.ChartPlot?.ReasoningEffortLevel ?? "minimal";

var state = _services.GetRequiredService<IConversationStateService>();
maxOutputTokens = int.TryParse(state.GetState("chart_plot_max_output_tokens"), out var tokens) ? tokens : maxOutputTokens;
reasoningEffortLevel = state.GetState("chart_plot_reasoning_effort_level").IfNullOrEmptyAs(reasoningEffortLevel);

return new AgentLlmConfig
{
    MaxOutputTokens = maxOutputTokens,
    ReasoningEffortLevel = reasoningEffortLevel
};
State Parsing

Parsing the max output tokens from conversation state falls back silently to the settings default if the state contains a non-numeric value; consider logging that fallback and clamping the parsed value to safe bounds to prevent extreme values.

var state = _services.GetRequiredService<IConversationStateService>();
maxOutputTokens = int.TryParse(state.GetState("chart_plot_max_output_tokens"), out var tokens) ? tokens : maxOutputTokens;
reasoningEffortLevel = state.GetState("chart_plot_reasoning_effort_level").IfNullOrEmptyAs(reasoningEffortLevel);

return new AgentLlmConfig
{
    MaxOutputTokens = maxOutputTokens,
    ReasoningEffortLevel = reasoningEffortLevel
};
Template Consistency

The strengthened initialization rule uses bold emphasis and sits in a different place than the previous phrasing; verify that the formatting renders correctly in the consumer and that it does not duplicate existing guidance.

** You must initialize the chart with explicit non-zero width (at least 800px) and non-zero height (at least 500px).


PR Code Suggestions ✨

Explore these optional code suggestions:

Category: High-level
Validate and cap LLM overrides

The suggestion recommends adding validation and capping for maxOutputTokens and
reasoningEffortLevel retrieved from the conversation state. This is to prevent
API errors and unexpected costs from invalid or unsupported values, centralizing
these checks within the GetLlmConfig method.

Examples:

src/Plugins/BotSharp.Plugin.ChartHandler/Functions/PlotChartFn.cs [148-162]
    private AgentLlmConfig GetLlmConfig()
    {
        var maxOutputTokens = _settings?.ChartPlot?.MaxOutputTokens ?? 8192;
        var reasoningEffortLevel = _settings?.ChartPlot?.ReasoningEffortLevel ?? "minimal";

        var state = _services.GetRequiredService<IConversationStateService>();
        maxOutputTokens = int.TryParse(state.GetState("chart_plot_max_output_tokens"), out var tokens) ? tokens : maxOutputTokens;
        reasoningEffortLevel = state.GetState("chart_plot_reasoning_effort_level").IfNullOrEmptyAs(reasoningEffortLevel);

        return new AgentLlmConfig

 ... (clipped 5 lines)

Solution Walkthrough:

Before:

private AgentLlmConfig GetLlmConfig()
{
    var maxOutputTokens = _settings?.ChartPlot?.MaxOutputTokens ?? 8192;
    var reasoningEffortLevel = _settings?.ChartPlot?.ReasoningEffortLevel ?? "minimal";

    var state = _services.GetRequiredService<IConversationStateService>();
    // No validation or capping on tokens from state
    maxOutputTokens = int.TryParse(state.GetState("chart_plot_max_output_tokens"), out var tokens) ? tokens : maxOutputTokens;
    // No validation on reasoningEffortLevel string from state
    reasoningEffortLevel = state.GetState("chart_plot_reasoning_effort_level").IfNullOrEmptyAs(reasoningEffortLevel);

    return new AgentLlmConfig
    {
        MaxOutputTokens = maxOutputTokens,
        ReasoningEffortLevel = reasoningEffortLevel
    };
}

After:

private AgentLlmConfig GetLlmConfig()
{
    var settingsMaxTokens = _settings?.ChartPlot?.MaxOutputTokens ?? 8192;
    var settingsReasoningEffort = _settings?.ChartPlot?.ReasoningEffortLevel ?? "minimal";

    var state = _services.GetRequiredService<IConversationStateService>();
    
    // Validate and cap tokens from state
    var maxOutputTokens = settingsMaxTokens;
    if (int.TryParse(state.GetState("chart_plot_max_output_tokens"), out var stateTokens) && stateTokens > 0)
    {
        maxOutputTokens = Math.Min(stateTokens, settingsMaxTokens); // Cap at a safe limit
    }

    // Validate reasoning effort from state
    var reasoningEffortLevel = state.GetState("chart_plot_reasoning_effort_level");
    if (!IsValidReasoningLevel(reasoningEffortLevel)) {
        reasoningEffortLevel = settingsReasoningEffort;
    }

    return new AgentLlmConfig { ... };
}
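
The IsValidReasoningLevel helper referenced in the "After" sketch is not defined in the PR or the suggestion itself; below is a minimal sketch of one possible shape, with the allowed set chosen as an assumption (the later suggestions argue for low/medium/high):

// Hypothetical helper assumed by the "After" sketch above; the allowed values are an
// assumption and should mirror whatever the target LLM provider actually accepts.
private static readonly HashSet<string> AllowedReasoningLevels =
    new(StringComparer.OrdinalIgnoreCase) { "low", "medium", "high" };

private static bool IsValidReasoningLevel(string? level)
    => !string.IsNullOrWhiteSpace(level) && AllowedReasoningLevels.Contains(level.Trim());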
Suggestion importance[1-10]: 8

Why: The suggestion correctly identifies a significant robustness issue where unvalidated maxOutputTokens and reasoningEffortLevel from the state service can cause API errors or unexpected costs, and the proposed validation is crucial.

Impact: Medium

Category: Possible issue
Validate reasoning effort values

Normalize the reasoning effort to valid provider-supported values and use a safe
default. An invalid value (e.g., "minimal") can cause the LLM call to fail.

src/Plugins/BotSharp.Plugin.ChartHandler/Functions/PlotChartFn.cs [151-161]

-var reasoningEffortLevel = _settings?.ChartPlot?.ReasoningEffortLevel ?? "minimal";
+var reasoningEffortLevel = _settings?.ChartPlot?.ReasoningEffortLevel ?? "low";
 
 var state = _services.GetRequiredService<IConversationStateService>();
 maxOutputTokens = int.TryParse(state.GetState("chart_plot_max_output_tokens"), out var tokens) ? tokens : maxOutputTokens;
 reasoningEffortLevel = state.GetState("chart_plot_reasoning_effort_level").IfNullOrEmptyAs(reasoningEffortLevel);
+
+// normalize reasoning effort to allowed values
+reasoningEffortLevel = (reasoningEffortLevel ?? "").Trim().ToLowerInvariant();
+if (reasoningEffortLevel != "low" && reasoningEffortLevel != "medium" && reasoningEffortLevel != "high")
+{
+    reasoningEffortLevel = "low";
+}
 
 return new AgentLlmConfig
 {
     MaxOutputTokens = maxOutputTokens,
     ReasoningEffortLevel = reasoningEffortLevel
 };
Suggestion importance[1-10]: 8

Why: The suggestion correctly identifies that "minimal" may be an invalid value for ReasoningEffortLevel and proposes robust validation to prevent potential runtime errors with the LLM provider.

Impact: Medium

Fix invalid config default

Use a valid reasoning effort default to prevent runtime API errors. Replace
"minimal" with a supported value like "low".

src/WebStarter/appsettings.json [324-325]

 "MaxOutputTokens": 8192,
-"ReasoningEffortLevel": "minimal"
+"ReasoningEffortLevel": "low"
Suggestion importance[1-10]: 8

Why: The suggestion correctly identifies that "minimal" is likely an invalid default value for ReasoningEffortLevel in the configuration, and changing it to "low" prevents potential runtime errors.

Impact: Medium

Validate max token limits

Guard against zero/negative token limits from settings or state. Non-positive
token values will cause provider errors or empty responses.

src/Plugins/BotSharp.Plugin.ChartHandler/Functions/PlotChartFn.cs [150-154]

-var maxOutputTokens = _settings?.ChartPlot?.MaxOutputTokens ?? 8192;
+var defaultMaxOutputTokens = _settings?.ChartPlot?.MaxOutputTokens ?? 8192;
+if (defaultMaxOutputTokens <= 0) defaultMaxOutputTokens = 8192;
+var maxOutputTokens = defaultMaxOutputTokens;
 var reasoningEffortLevel = _settings?.ChartPlot?.ReasoningEffortLevel ?? "minimal";
 
 var state = _services.GetRequiredService<IConversationStateService>();
-maxOutputTokens = int.TryParse(state.GetState("chart_plot_max_output_tokens"), out var tokens) ? tokens : maxOutputTokens;
+if (int.TryParse(state.GetState("chart_plot_max_output_tokens"), out var tokens) && tokens > 0)
+{
+    maxOutputTokens = tokens;
+}
Suggestion importance[1-10]: 7

Why: The suggestion correctly points out that a non-positive maxOutputTokens value from settings or state can cause runtime errors, and proposes adding validation to prevent this.

Impact: Medium

iceljc merged commit dbac626 into SciSharp:master on Sep 10, 2025
4 checks passed