Large Language Models (LLMs) are arguably the most consequential technological innovation of this century. Dreams of AGI (Artificial General Intelligence), once limited to sci-fi, are now a near-reality. That AGI might lead to ASI - Artificial Super Intelligence - and the ensuing singularity is well within the realm of possibility.

That is the future.

The now is different.

For all their intelligence, LLMs can yield remarkably poor results if you don’t know how to prompt them. As with any tool, the skill of the user matters. Skilled prompt engineers extract far more value from LLMs than users who simply paste in “create [xyz]”.

Nowhere has this been more apparent than in the very real playground that we built at CoderGF. Users who pass in a strong prompt get far higher-quality output - apps and chatbots - than users who lazily type in whatever first comes to mind.

Behind some of your favorite AI apps and agents, there lurks an extremely detailed system prompt. LLMs are mimetic; they can imitate anything as long as the instructions are clear enough.

The prompt, thus, becomes the primary unit of Artificial Intelligence.

And yet, unlike many things in crypto, the prompt remains unappreciated, unmonetized, untokenized.

But things don’t have to be this way.

Tokenizing Prompts

Anyone with any degree of experience with LLMs understands two fundamental truths:

1. The quality of the output depends on the quality of the prompt.
2. The quality of the output depends on the model that runs it.

To ascertain the quality of a prompt, then, you need both the input (the prompt) and the mediating engine (the LLM).

But herein lies a challenge: the mediating engine - the LLM - outputs raw data. To interpret that data, you need interpretive middleware.

In its most basic form, that’s precisely what we built with CoderGF: interpretive middleware for code.

You can ask ChatGPT to “create a todo app”, and you will likely get reasonably well-written code as text. But whether that code actually produces a working, high-quality app is indeterminate; you need to copy the code into a real project and deploy it to a server to evaluate the result.

Those are skills the average person who can’t code doesn’t have. If you need to run “create-react-app” and “npm run dev” just to see the working output of a prompt, you’ve already lost the majority of your audience.
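
To make that concrete, here is a minimal, hypothetical sketch (in browser-side TypeScript) of what interpretive middleware for code might look like: take the raw text an LLM returns, pull out the code, and render it in a sandboxed iframe so the user sees a running result instead of a wall of source. The function names (extractCode, renderInSandbox) are illustrative, not CoderGF’s actual API, and the sketch assumes the model returns a self-contained HTML/JS snippet.

```ts
// Illustrative sketch of "interpretive middleware" for code.
// Names and structure are assumptions, not CoderGF's real implementation.

// Pull the first fenced code block out of the model's raw text output.
function extractCode(llmOutput: string): string {
  const match = llmOutput.match(/```(?:\w+)?\n([\s\S]*?)```/);
  return match ? match[1] : llmOutput; // fall back to treating the whole reply as code
}

// Render a self-contained HTML/JS snippet inside a sandboxed iframe.
function renderInSandbox(code: string, container: HTMLElement): void {
  const frame = document.createElement("iframe");
  frame.sandbox.add("allow-scripts"); // scripts run, but can't touch the parent page
  frame.srcdoc = code;                // no build step, no server, no "npm run dev"
  container.appendChild(frame);
}

// Usage: hand the model's reply straight to the middleware.
// renderInSandbox(extractCode(rawReplyFromLLM), document.getElementById("preview")!);
```

Real apps add far more on top of this - multiple files, dependencies, persistent deployments - but the principle is the same: a prompt’s output only becomes legible once something turns raw model text into a running artifact.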