LLMs for the Old and Infirm

A few people have asked me, an old man, how I manage to use LLMs in my life without being driven insane by their horrid new-fangledness, their hallucinations, their wanton sycophancy, the hype, the grift, and the ever-present risk of being lured into psychosis by these sweet-talking daemons that haunt our modern, fallen age. The simple answer is that, as a command-line fogey, I use Simon Willison's excellent llm program in the terminal, and trap the poor things in the confines of being just another unix utility in my toolkit, along with sed, pandoc, and the rest.

Below is a list of examples of how I use llm, plucked from a random day. I generated the list by running:

llm logs -t -n50 | llm "These are log files from Simon Willison's
    llm program. I'd like to show my friends the kind of thing I use it for. \
    Can you take these, and categorise them broadly by category, with each one \
    a short description, phrased as though it were the question being \
    asked by the initial prompt, linking to a page showing that prompt and its \
    results -- as if I exported a datasette output on the log sqlite with url structure \
    https://danny.spesh.com/ai/datasette/llm/conversations/.html \
    . Highlight in bold any that seem to be a particularly good demonstration of \
    the capabilities of LLMs, as opposed to a simple alternative to other tools \
    like Google search or a calculator. If any of them seem personal or \
    private, put them in a separate category at the bottom marked private. All \
    of this should be output as markdown, easily convertable into HTML" > result.md

I converted that result.md into this page using pandoc result.md -o index.html

To make the linked pages, which I anticipated would contain rough transcripts of the results of those llm commands, I asked llm to write me a program to generate them:

files-to-prompt result.md | llm \
    -T 'SQLite("/Users/danny/Library/Application Support/io.datasette.llm/logs.db")' \
    -x "Look at the logs db and see if you can write a script that will generate \
    viewable html at the right URLs for the conversations and links listed in result.md" > generate.py

files-to-prompt is a simple program that concatenates files' contents, each prefixed with its filename -- a great way to slam a lot of files into a prompt with sufficient context. The -T SQLite bit gives my llm model of choice (this is all being run on Anthropic's Claude by default, but I could switch it to OpenAI, or a local LLM, very easily) read-only access to a local sqlite file, here the llm command's own logs. Very recursive. LLMs know enough SQL to be dangerous, so the model can work out schemas and explore the contents by itself. The -x flag restricts llm's output to just the part of the LLM's answer that is surrounded by ```-style markdown code fences, a very effective way to get just the source code, without any of the tedious explanation that might accompany it.
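If you're curious what -x is doing conceptually, here's a rough sketch in the same old-unix spirit (sed, naturally). This is a toy imitation, not llm's actual implementation, which handles the edge cases properly:

```shell
# Print what lies between the first pair of ``` fences, dropping the
# fence lines themselves -- a crude imitation of llm's -x flag.
extract_fence() {
  sed -n '/^```/,/^```/p' | sed '1d;$d'
}

# A typical chatty answer, with the code buried in the middle:
printf '%s\n' 'Sure! Here is your script:' '```python' 'print("hello")' '```' \
  | extract_fence
# prints: print("hello")
```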

That produced (with a few very minor tweaks by me) this Python program. And the nice HTMLification of that Python program came via this command:

llm "can you create an html template that i could paste the source
    of a python script into, and it would be syntax-highlighted correctly and
    look pretty? You can pull in external js resources available on
    cdns" > template.html
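The skeleton it handed back was along these lines -- a sketch of the shape of it, not the exact file, pulling highlight.js from the cdnjs CDN (the version number and styling here are my own assumptions):

```html
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>generate.py</title>
<!-- highlight.js stylesheet and script, fetched from a public CDN -->
<link rel="stylesheet"
  href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/styles/github.min.css">
<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/highlight.min.js"></script>
<style>body { max-width: 50em; margin: 2em auto; }</style>
</head>
<body>
<pre><code class="language-python">
# paste the python source here
</code></pre>
<script>hljs.highlightAll();</script>
</body>
</html>
```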

As you can see, I'm still fairly heavily stuck in the 1990s, Unix and hand-crafted HTML and all. But now I have a happy Sirius Cybernetics buddy from the future to help me. Share and enjoy!

PS Here's Simon's far better guide to using llm.

My LLM Usage Log - Categorized Summary

Shell Scripting & Development

System Administration & DevOps

Git & Version Control

Filecoin and Ethereum Technical Support

Technical Troubleshooting & Debugging

Writing & Language Questions

Translation Services

Terminal & Display Tools

Time Zone & Calculations

Meta-Analysis & Documentation

Research & Analysis