War with small language models

Everyone knows how error-prone, blissfully naive, and illogical large language models can be at times. And so I have a bit of a self-deprecating joke where I say I used a small language model to create something - to write some code, to create a document, etc. - i.e. my brain is only slightly more intelligent than a bag of bricks. I both believe this and am okay with this. :)

War with small language models

I think what’s fascinating is how I can look around myself and see the accomplishments of my past self: whether it be the career I’ve had, the family I’ve rescued, the garden I’ve cared for year-over-year, or code I’ve written in past years that others seem to appreciate more than I do today (a uniquely fascinating feeling in itself).

Being able to introspect on past accomplishments - and, more importantly, to reflect on my state of mind at the time I was making those achievements - tells me that I’ve achieved great things through two primary means:

  1. Most often through sheer luck, or by encountering uniquely impactful individuals in environments I put myself in.
  2. By going to war with my small language model, i.e. brain.

In its default state, my small language model would love to sit around, watch mind-numbing YouTube videos, procrastinate, embrace seemingly bipolar thoughts of anxiety or fear for the future, enjoy the latest hype train or controversial discussion that the tech scene has to offer, neglect chores, etc.

A naive onlooker might be able to observe such a small language model and identify its behavior patterns, where it gets positive and negative feelings, how the context and environment of the day play into its outputs, etc.

A more intelligent onlooker, however, might observe that although the small language model is... well, rather naive and frankly stupid - its training data has something fundamentally strange in it. The daily desires and short-lived ups and downs, which appear frankly erratic, barely influence the long-term desire to do something ultimately great - an achievement like no other.

A drive and almost hard-coded sentiment that underpins everything, enforcing that what the small language model does day to day virtually doesn’t matter, because there must exist a singular spectacular achievement within the lifetime or else there was no lifetime to begin with. A theory that the small language model would have no reason to have been created if it were not for a great achievement towards the end of its function - and so therefore there must exist a great achievement somewhere in its execution, despite any localized challenges.

It is only with that sentiment and undertone that war can exist - that the daily annoyances of the small language model’s limited context, its seemingly swaying daily thoughts of positivity and negativity, can all blur away to instead build towards the drive and passion for a long-term great achievement - an environment where frankly nothing else matters - neatly encapsulated in that small language model.

That’s all for now, folks

The above is something I’ve been trying to find a way to articulate and express for a while. I hope you enjoyed it, or find it useful for your own path in life.