Agent Tool Definition Standards: How to Write Tool Description Fields So LLMs Can Invoke Them Accurately

Agent stability largely depends on the quality of tool definitions. If a tool is defined vaguely, the model will misuse it; if definitions overlap excessively, the model will hesitate.

This behavior is clearly documented in public sources. The official Dify Agent documentation lists Tool Description, Parameters, and Authorization as core configuration items and explicitly states that the tool description guides the model on when to use the tool. This training material therefore explains a publicly documented product constraint, not a set of subjective preferences.

1. Agent Tool Definition Principles Confirmed by Public Sources

1. The Tool Description Itself Is a Model Decision Input

The official documentation explicitly states that the tool description is not a note for humans but key context that helps the LLM determine “when to use this tool.”

2. Parameter Design Directly Affects Invocation Success Rate

If parameter semantics are ambiguous, required vs. optional is unclear, or field validation is weak, the Agent may select the correct tool but still pass incorrect parameters.

3. The Most Valuable Training Content Is “Good Definition vs. Bad Definition” Comparisons

This type of content is ideal for demonstrations: for the same API, if the tool description is written clearly, model invocations will be much more stable; if it is written too broadly, the model tends to misuse the tool or probe it repeatedly.
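The contrast can be sketched as two wrappings of the same hypothetical order-lookup API. The tool names, fields, and ID format below are invented for illustration; the structure follows the common function-calling schema style, not an official Dify format.

```python
# Bad: vague description, no scope, no failure guidance.
bad_tool = {
    "name": "query_info",
    "description": "Query information.",
    "parameters": {
        "type": "object",
        "properties": {"q": {"type": "string"}},
    },
}

# Good: states what it does, what it does not do, when to use it,
# and how failures surface.
good_tool = {
    "name": "get_order_status",
    "description": (
        "Look up the shipping status of a single e-commerce order by its "
        "order ID. Use only when the user provides or clearly references "
        "an order ID. Does NOT search orders by customer name or date, "
        "and does not modify orders. Returns 'not_found' if the ID does "
        "not exist."
    ),
    "parameters": {
        "type": "object",
        "properties": {
            "order_id": {
                "type": "string",
                "description": "Order ID, e.g. 'ORD-2024-000123'.",
                "pattern": r"^ORD-\d{4}-\d{6}$",
            }
        },
        "required": ["order_id"],
    },
}
```

With the first definition, the model has no signal about when the tool applies; with the second, both the applicable situation and the out-of-scope cases are explicit.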

2. What a Tool Description Should Include

  • What it can do
  • What it cannot do
  • What types of problems it applies to
  • What parameters it requires
  • How to handle failures
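One way to make the five elements above operational is a fill-in template that trainees complete for each tool. The wording and field names here are a suggestion, not an official Dify format.

```python
# Template covering: capability, applicability, limitations,
# required parameters, and failure handling.
DESCRIPTION_TEMPLATE = """\
{what_it_does}. Use this tool when {applicable_problems}.
It cannot {limitations}.
Required parameters: {required_params}.
On failure it returns {failure_behavior}; do not retry more than once.
"""

example = DESCRIPTION_TEMPLATE.format(
    what_it_does="Fetches the current stock level for a single SKU",
    applicable_problems=(
        "the user asks about inventory for a specific product code"
    ),
    limitations="search by product name or update stock levels",
    required_params="sku (string, e.g. 'SKU-10482')",
    failure_behavior="an error object with a 'reason' field",
)
print(example)
```

If any slot is hard to fill in, that is usually a sign the tool's scope itself is not yet well defined.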

3. Parameter Design Recommendations

  • Parameter names with clear semantics
  • Required and optional clearly distinguished
  • Strong constraints for date, number, and ID fields
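These recommendations translate directly into JSON-Schema-style constraints. The schema below is a hypothetical sketch (field names and ID formats are invented), with a minimal string check to show how a pattern constraint rejects ambiguous input before it ever reaches the API.

```python
import re

# Strongly constrained parameter schema for a hypothetical report tool.
params_schema = {
    "type": "object",
    "properties": {
        "start_date": {
            "type": "string",
            "description": "Inclusive start date in YYYY-MM-DD format.",
            "pattern": r"^\d{4}-\d{2}-\d{2}$",
        },
        "limit": {
            "type": "integer",
            "description": "Maximum number of rows to return.",
            "minimum": 1,
            "maximum": 100,
        },
        "customer_id": {
            "type": "string",
            "description": "Customer ID, e.g. 'CUST-00042'.",
            "pattern": r"^CUST-\d{5}$",
        },
    },
    # Required vs. optional is stated explicitly, not left implicit.
    "required": ["start_date", "customer_id"],
}

def check_string(value: str, spec: dict) -> bool:
    """Minimal check of a string value against one property spec."""
    pattern = spec.get("pattern")
    return pattern is None or re.fullmatch(pattern, value) is not None

print(check_string("2024-07-01", params_schema["properties"]["start_date"]))  # True
print(check_string("July 1st", params_schema["properties"]["start_date"]))    # False
```

In practice a full JSON Schema validator would enforce the numeric bounds and required fields as well; the point is that the constraints live in the tool definition, where the model can see them.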

4. Practices to Avoid

  • Overly broad descriptions like “query information”
  • Two tools that both appear to be all-purpose
  • Not documenting failure scenarios
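The second anti-pattern, two tools that both appear all-purpose, can be shown side by side with its fix. All names and wording below are invented for illustration.

```python
# Anti-pattern: two descriptions so similar the model cannot choose.
overlapping = [
    {"name": "search_data", "description": "Search for data."},
    {"name": "find_records", "description": "Find records in the system."},
]

# Fix: give each tool a disjoint, concrete scope, including what it
# explicitly does NOT cover.
disambiguated = [
    {
        "name": "search_products",
        "description": (
            "Full-text search over the product catalog by keyword. "
            "Does not access customer or order data."
        ),
    },
    {
        "name": "get_customer_orders",
        "description": (
            "List a customer's past orders by customer ID. "
            "Does not search the product catalog."
        ),
    },
]
```

A quick review heuristic: if you cannot say in one sentence which of two tools handles a given request, neither can the model.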

5. Training Focus

Have trainees practice wrapping the same API as both a “good tool definition” and a “bad tool definition,” then compare the differences in model invocation behavior.

Public Source References

note.com

  • Domestic AI Agent Trends (2026/4/1 issue) | https://note.com/yasuhitoo/n/ne72b855e32ad

zenn.dev / Official Documentation / Other Public Pages

  • Agent | https://docs.dify.ai/ja/use-dify/nodes/agent
  • Agent (Legacy Japanese) | https://legacy-docs.dify.ai/ja-jp/guides/workflow/node/agent
  • [AI Agent Jump Start: Advanced #7] Dify | https://zenn.dev/dxclab/articles/ddceffea0903f3

Confirmed Information from Public Sources

  • Tool descriptions directly affect Agent decision-making behavior
  • Parameter design quality directly impacts invocation success rate and stability
  • This topic is highly suited for “good vs. bad comparison demonstrations” in partner training