In my previous article, Master the Art of Command Line: Your Ultimate Guide to Developing Powerful Tools, we explored how to build a CLI tool from the ground up. Now, let’s take the next step — how do you make it good? Ultimately, the key lies in ease of use. A well-designed CLI feels intuitive, predictable, and efficient for the user. However, this ease of use is achieved through several distinct design principles, which we’ll explore in this article.
For demonstration, we’ll use the Click Python library, but the principles apply to almost any CLI framework you might use.
Parameters
Most CLI tools accept input from the user. In most cases, this input isn’t interactive — it comes in the form of parameters that define how the tool behaves.
For example, the cp command takes at least two parameters:
- The path to the file or directory to copy, and
- The path to the destination.
It can also accept optional parameters like -v, which enables verbose output.
Broadly, there are two types of parameters: required and optional. In most CLI libraries, required parameters are called arguments (or positional arguments), and optional parameters are called options.
Positional arguments
As the name suggests, positional arguments are determined by their position. The user provides values in a specific order, and the tool interprets them accordingly.
For instance, if the cp command expects two positional arguments — source and destination — the user would call it as:
cp source destination
Even if your tool requires input, it’s often cumbersome (and unnecessary) to make users specify every value manually. A good rule of thumb: if you can provide a sensible default, make it an option instead of an argument.
For example, a --log-level parameter could default to info. Forcing users to remember the position and always pass info doesn’t add value — it just adds friction.
Multiple Arguments
Sometimes a tool may need to accept multiple values for a single argument. A common example is specifying multiple source files to copy into a single destination directory. This allows the user to run one command instead of several:
cp source1 source2 destination
There are some caveats — typically, only one argument can accept multiple values. Your code (not the argument parser) must handle additional validation and logic.
Options
Options let users specify parameters by name, often in the form --option value. If the user doesn’t provide an option, the tool uses its default value.
For example:
--log-level debug
A special kind of option is a flag — it doesn’t take a value. The mere presence of the flag toggles a behavior, such as enabling verbose output:
--verbose
Option names should be clear and descriptive, so users can easily understand their purpose.
Example implementation:
import click


@click.command()
@click.option('--verbose', is_flag=True)      # flag: its presence alone toggles the behavior
@click.option('--log-level', default="info")  # option with a sensible default
@click.argument('srcs', nargs=-1)             # accepts one or more source paths
@click.argument('dst')                        # the last positional value is the destination
def copy(verbose: bool, log_level: str, srcs: tuple[str, ...], dst: str):
    ...
Now, the tool can be called in several ways:
cp --verbose --log-level debug source1 source2 destination
cp source destination
Short options
So far, we’ve used long option names. These are ideal for scripts, since they’re self-explanatory and easy to understand at a glance.
However, for quick, one-time commands, users often prefer short options — like -v instead of --verbose. Be selective when introducing short forms. Not every option needs one. If your tool has many options, you’ll quickly run out of intuitive letters — especially since multiple options might start with the same one. It’s usually enough to define short versions only for frequently used options.
Example:
@click.command()
@click.option('-v', '--verbose', is_flag=True)  # '-v' is the short form, '--verbose' the long form
@click.option('--log-level', default="info")
@click.argument('srcs', nargs=-1)
@click.argument('dst')
def copy(verbose: bool, log_level: str, srcs: tuple[str, ...], dst: str):
...
Now both forms work:
cp -v source destination
cp --verbose source destination
Validate user input and fail fast
A robust CLI tool should validate user input early and fail with a clear, actionable message. Most CLI frameworks support this pattern.
For example, Click can enforce valid log levels and path arguments automatically:
@click.command()
@click.option('-v', '--verbose', is_flag=True)
@click.option('--log-level', default="info", type=click.Choice(["debug", "info", "error"]))
@click.argument('srcs', nargs=-1, type=click.Path(exists=True))  # each source path must already exist
@click.argument('dst', type=click.Path())
def copy(verbose: bool, log_level: str, srcs: tuple[str, ...], dst: str):
...
However, you may still need custom logic. In our copy example:
- If there’s one source, the destination can be a file or directory.
- If there are multiple sources, the destination must be a directory — otherwise files might overwrite each other unexpectedly.
Generic CLI libraries won’t handle such cases automatically, so this validation must live in your code.
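A minimal sketch of that check, assuming the Click-based copy command from above (the exact wording of the messages is up to you):

import os

import click


@click.command()
@click.argument('srcs', nargs=-1, type=click.Path(exists=True))
@click.argument('dst', type=click.Path())
def copy(srcs: tuple[str, ...], dst: str):
    # Rules the argument parser can't express declaratively:
    if not srcs:
        raise click.UsageError("At least one source path is required.")
    if len(srcs) > 1 and not os.path.isdir(dst):
        raise click.UsageError("With multiple sources, the destination must be an existing directory.")
    ...

click.UsageError prints the message together with the usage line and exits with a non-zero code, so the tool still fails early and explicitly.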
Help message
A good tool always provides a clear, informative help message.
Users should be able to run something like --help to see:
- What parameters are available
- Their meaning
- Accepted types or values (e.g., valid log levels)
A clear help message saves time, reduces errors, and improves user trust in your tool.
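With Click, much of this comes for free: the function's docstring becomes the command description, each option accepts a help string, and --help is generated automatically. A minimal sketch:

import click


@click.command()
@click.option('--log-level', default="info", show_default=True,
              type=click.Choice(["debug", "info", "error"]),
              help="How much detail to log.")
@click.argument('srcs', nargs=-1, type=click.Path())
@click.argument('dst', type=click.Path())
def copy(log_level: str, srcs: tuple[str, ...], dst: str):
    """Copy one or more SRCS to DST."""
    ...

Running the tool with --help now lists the arguments, the option, its accepted values, and its default.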
Follow your ecosystem patterns
Finally, great CLI tools feel native to their ecosystem. Study other tools in your environment and follow their conventions — especially for naming options and arguments.
For example, in the Unix world, tools like cp, mv, and rsync all use source and destination as positional arguments, with source always preceding destination.
Consistent naming and familiar option letters (-v for verbose, -r for recursive, -n for dry run, -h/--help for help) make your tool blend seamlessly into the user’s workflow.
When your tool feels like a natural part of the ecosystem, users will be far more likely to adopt it — and enjoy using it.
Other aspects
Beyond parameters and arguments, there are several other factors that distinguish a good CLI tool from a merely functional one. These aspects affect how your tool integrates with scripts, how users debug issues, and how easily it fits into larger workflows.
Exit codes
Every program returns an exit code when it finishes execution. This code lets the caller know whether the program succeeded or failed.
By convention:
- 0 means success
- Any non-zero value indicates an error
Although there’s no universal standard for exit codes, it’s good practice to at least exit with a non-zero code on any error.
For better consistency, consider following the conventions from the Advanced Bash-Scripting Guide. These are widely recognized in the Unix world.
You should also document your exit codes — include them in the --help output or the tool’s manual page. When users or automation systems understand what exit code 3 means, they can handle errors more intelligently (e.g., retry, skip, or alert).
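Here is a minimal sketch of that idea; the specific codes (3 and 4) and their meanings are illustrative choices for this example, not a standard:

import os
import sys

import click

# Illustrative exit codes for this tool; document them in --help or the man page.
EXIT_OK = 0
EXIT_MISSING_SOURCE = 3
EXIT_COPY_FAILED = 4


@click.command()
@click.argument('srcs', nargs=-1, type=click.Path())
@click.argument('dst', type=click.Path())
def copy(srcs: tuple[str, ...], dst: str):
    """Copy SRCS to DST.

    Exit codes: 0 success, 3 missing source, 4 copy failed.
    """
    for src in srcs:
        if not os.path.exists(src):
            click.echo(f"Error: source '{src}' does not exist.", err=True)
            sys.exit(EXIT_MISSING_SOURCE)
    ...  # perform the copy; use sys.exit(EXIT_COPY_FAILED) if it fails
    sys.exit(EXIT_OK)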
STDOUT vs STDERR
A well-behaved CLI tool distinguishes between its two standard output streams:
- STDOUT — for regular, expected output
- STDERR — for errors, warnings, or diagnostic messages
This separation allows users and scripts to handle each stream independently. For instance, a user can redirect the normal output to a file while keeping error messages visible on the console:
mytool > output.txt
or suppress errors:
mytool 2>/dev/null
If your tool writes both data and errors to STDOUT, users will need to filter output manually — and that’s fragile.
Especially if your tool’s output is consumed by other programs, keeping STDOUT clean and predictable is essential.
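With Click, this separation is a single keyword argument: click.echo() writes to STDOUT by default and to STDERR when called with err=True. A minimal sketch:

import click


@click.command()
@click.argument('srcs', nargs=-1, type=click.Path())
@click.argument('dst', type=click.Path())
def copy(srcs: tuple[str, ...], dst: str):
    for src in srcs:
        # Progress and diagnostics go to STDERR so they never pollute piped output.
        click.echo(f"copying {src} -> {dst}", err=True)
        ...
    # Only the actual result of the command goes to STDOUT.
    click.echo(f"copied {len(srcs)} file(s) to {dst}")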
Output format
If your tool produces data that might be consumed by other tools or scripts, it’s wise to support machine-readable output formats such as JSON or YAML.
For example:
mytool --output json
This makes automation much easier — other tools can use standard parsers instead of relying on brittle string parsing that breaks whenever spacing, alignment, or punctuation changes.
However, don’t sacrifice human readability. Many users will still run your tool interactively. The best approach is to:
- Default to human-readable output (nicely formatted text)
- Offer a flag (e.g. --json, or --format if you want to support multiple formats) for structured output
Some CLI tools also detect whether they are writing to a terminal and adjust their formatting accordingly, on top of explicit structured-output flags such as kubectl's -o json or gh's --json.
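A sketch of how that can look with Click; the --format flag name, the list of formats, and the auto-detection fallback are all design choices for this example:

import json
import sys

import click


@click.command()
@click.option('--format', 'output_format', default=None,
              type=click.Choice(["text", "json"]),
              help="Output format. Defaults to text on a terminal, json when piped.")
def status(output_format: str | None):
    result = {"files_copied": 3, "errors": 0}  # example data

    if output_format is None:
        # Auto-detect: human-readable output for terminals, JSON for pipes.
        output_format = "text" if sys.stdout.isatty() else "json"

    if output_format == "json":
        click.echo(json.dumps(result))
    else:
        click.echo(f"Copied {result['files_copied']} files, {result['errors']} errors.")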
Error Messages
A good error message should be:
- Clear — state what went wrong
- Actionable — suggest how to fix it
- Consistent — use the same tone and structure throughout
Bad example:
Error: failed
Better example:
Error: Could not open file '/tmp/input.txt' — file not found.
Hint: Verify that the path is correct and you have read permissions.
Good error handling saves users time and helps them build trust in your tool.
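A small sketch of this pattern, printing both the error and the hint to STDERR before exiting with a non-zero code (the read_input helper and its messages are made up for illustration):

import os
import sys

import click


def read_input(path: str) -> str:
    if not os.path.exists(path):
        # Clear: what went wrong. Actionable: how to fix it. Both go to STDERR.
        click.echo(f"Error: Could not open file '{path}': file not found.", err=True)
        click.echo("Hint: Verify that the path is correct and you have read permissions.", err=True)
        sys.exit(1)
    with open(path) as f:
        return f.read()

Click also provides click.ClickException, which formats the message as "Error: ..." on STDERR and exits with a non-zero code for you.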
Logging and Verbosity
Users appreciate control over how much information they see. A typical approach:
- --quiet for minimal output
- --verbose for detailed messages
- --debug for developer-level tracing
Design your logging levels so users can choose the right amount of information for their context.
When combined with proper exit codes, these levels make debugging and automation much smoother.
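One common way to wire these flags up is to map them onto Python's standard logging levels. A sketch, where the precedence order (debug over verbose over quiet) is a design choice, not a Click requirement:

import logging

import click


@click.command()
@click.option('--quiet', is_flag=True, help="Only show errors.")
@click.option('--verbose', is_flag=True, help="Show detailed progress messages.")
@click.option('--debug', is_flag=True, help="Show developer-level tracing.")
def copy(quiet: bool, verbose: bool, debug: bool):
    # The most detailed flag wins: --debug > --verbose > --quiet > default.
    if debug:
        level = logging.DEBUG
    elif verbose:
        level = logging.INFO
    elif quiet:
        level = logging.ERROR
    else:
        level = logging.WARNING
    logging.basicConfig(level=level)
    ...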
Responsiveness and Performance
If your CLI performs heavy operations, consider providing progress indicators or spinners. For example:
Downloading files... 45% complete
Users shouldn’t have to wonder whether the program has hung. At the same time, avoid unnecessary animations or output when the tool is being run in non-interactive mode (e.g. as part of a script).
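Click ships a built-in progress bar, and checking whether STDOUT is a terminal is one way to skip the animation in scripts. A sketch with an invented list of work items:

import sys
import time

import click


@click.command()
def download():
    files = ["a.bin", "b.bin", "c.bin"]  # stand-in for the real work items

    if sys.stdout.isatty():
        # Interactive run: show live progress so the user knows the tool hasn't hung.
        with click.progressbar(files, label="Downloading files") as bar:
            for name in bar:
                time.sleep(0.2)  # placeholder for the actual download
    else:
        # Non-interactive run (e.g. inside a script): no animation, just do the work.
        for name in files:
            time.sleep(0.2)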
Conclusion
Building a CLI tool is relatively easy. Making a good one — that’s the real craft. A good command-line tool feels natural, behaves predictably, and quietly disappears into the user’s workflow. It doesn’t force users to adapt to it — it adapts to how users already work.
Throughout this article, we’ve looked at what contributes to that experience:
- Clear and consistent parameters that balance simplicity and flexibility
- Meaningful exit codes that communicate success or failure
- Proper use of STDOUT and STDERR to separate data from diagnostics
- Readable and structured output formats for both humans and machines
- Helpful error messages and predictable behavior
- Adherence to the conventions of your ecosystem, so your tool feels familiar from the first run
Ultimately, the best CLI tools share a common trait: they respect the user’s time. They start fast, fail fast, explain themselves clearly, and integrate seamlessly into larger systems and scripts.
If you keep these principles in mind — clarity, consistency, and empathy for your users — your CLI tool won’t just work; it’ll be a pleasure to use.