LLMified

Saving time and storage (with style)

LinkedIn

For a long time now, I have replied to every recruiter. This became a chore over time, so I've developed a userscript that loads on every LinkedIn conversation and calls Mistral AI to generate a reply in my preferred style. Whenever I open a conversation, I hit the button it adds and it replies adequately, with respect to the history of the conversation, my priorities of the moment, the language of the conversation, the tone, etc.

It is then up to me to post the proposed answer as-is, or to edit or discard it.
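The actual userscript is JavaScript running on the LinkedIn page, but the API call behind the button looks roughly like this Python sketch. The style prompt, model name, and conversation shape here are illustrative assumptions, not my exact setup:

```python
# Sketch of the reply-generation call behind the userscript's button.
# STYLE_PROMPT and the conversation dict shape are illustrative assumptions.
import json
import urllib.request

MISTRAL_URL = "https://api.mistral.ai/v1/chat/completions"

STYLE_PROMPT = (
    "Reply to the recruiter in my voice: concise, polite, and in the "
    "language of the conversation. Account for my current priorities."
)

def build_payload(conversation: list[dict], model: str = "mistral-small-latest") -> dict:
    """Turn the scraped conversation history into a chat-completions payload."""
    messages = [{"role": "system", "content": STYLE_PROMPT}]
    for msg in conversation:
        role = "assistant" if msg["from_me"] else "user"
        messages.append({"role": role, "content": msg["text"]})
    return {"model": model, "messages": messages}

def suggest_reply(conversation: list[dict], api_key: str) -> str:
    """Call Mistral's chat-completions endpoint and return the drafted reply."""
    req = urllib.request.Request(
        MISTRAL_URL,
        data=json.dumps(build_payload(conversation)).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Feeding the whole history as alternating user/assistant messages is what lets the model match the conversation's language and tone without extra instructions.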

Dropbox storage optimization

As a parent, I developed the habit of archiving digital souvenirs of our kid’s life. Those pics and videos accumulate. As someone very organized, I like to avoid duplicates and also save the correct metadata (EXIF) in our pics and videos, which proves to be challenging with older pics from WhatsApp groups.

I also wanted to ensure that every time pictures and videos are shared in WhatsApp family groups, I collect them in our Dropbox. This is done via Syncthing-Fork (the Android client) and Syncthing servers running on my Cloudron and my MacBook Pro. Syncthing monitors all folders that can contain videos or pictures. This setup helps effortlessly manage all my pics/videos:

  • Moves all new pics/videos from Android-monitored folders to Dropbox. As those folders are kept in sync via Syncthing, moving a pic/video out of a monitored folder into Dropbox removes it from every location Syncthing monitors on my Android. This helps organize things on Dropbox while also making room on my Android device. Syncthing is set up manually, but it’s easy to manage.
  • Detects all pics/videos that do not contain a face, using YOLOv5 by Ultralytics. The script was generated via LLM.
  • Removes duplicates via some LLM-generated scripts, or manually via the UI of https://github.com/arsenetar/dupeguru.
  • Compresses/converts pictures and videos using https://ffmpeg.org/, installed locally. This saves hundreds of GBs of data, which reduces the bandwidth and resources needed to download, sync, and display such files. On top of that, I need less storage, so I can keep a cheaper Dropbox subscription for longer. An LLM generated most of the scripts needed for the task.
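The compression step boils down to shelling out to ffmpeg. Here is a minimal sketch, assuming ffmpeg is on the PATH; the codec and CRF choices are illustrative, not the exact settings my LLM-generated scripts use:

```python
# Sketch of the video-compression step, assuming ffmpeg is installed locally.
# The libx265/CRF settings are illustrative assumptions.
import subprocess
from pathlib import Path

def compress_cmd(src: Path, dst: Path, crf: int = 28) -> list[str]:
    """Build the ffmpeg command: re-encode video to H.265, keep audio, copy metadata."""
    return [
        "ffmpeg", "-i", str(src),
        "-c:v", "libx265", "-crf", str(crf),  # higher CRF = smaller file, lower quality
        "-c:a", "copy",                       # audio is already small; don't re-encode
        "-map_metadata", "0",                 # preserve creation-date and other tags
        str(dst),
    ]

def compress(src: Path, dst: Path) -> None:
    """Run the compression, raising if ffmpeg fails."""
    subprocess.run(compress_cmd(src, dst), check=True)
```

Keeping `-map_metadata 0` matters here: without it, ffmpeg drops the original tags, which defeats the whole point of carefully fixing EXIF data beforehand.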

Newsletters summaries

As explained in a previous blog (https://morgan.zoemp.be/indieblog/), I'm keeping track of all new blog posts listed in https://indieblog.page/. I first cooked a daily RSS feed for it, using LLMs. This proved useful, but as the list of RSS feeds grew over time, I realized I needed to filter the list better, or to limit the output. I don't want the filter to be random; I want it to be based on my taste. So I created another RSS feed, which is an LLM-aided summary of the first one.
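In outline, the second feed reads the first one, keeps only the items worth my attention, and hands the survivors to an LLM for the digest. This sketch uses only the stdlib; the keyword filter and the `digest` selection are illustrative stand-ins for the real LLM-aided taste filter:

```python
# Sketch of the feed-filtering stage; the keyword matching is an
# illustrative stand-in for the LLM-based selection.
import xml.etree.ElementTree as ET

def feed_items(rss_xml: str) -> list[dict]:
    """Extract title/link pairs from a plain RSS 2.0 document."""
    root = ET.fromstring(rss_xml)
    return [
        {"title": item.findtext("title", ""), "link": item.findtext("link", "")}
        for item in root.iter("item")
    ]

def matches_taste(item: dict, keywords: set[str]) -> bool:
    """Crude taste filter: keep items whose title mentions a keyword."""
    title = item["title"].lower()
    return any(k in title for k in keywords)

def digest(rss_xml: str, keywords: set[str]) -> list[dict]:
    """Items that survive the filter; these would then be summarized by the LLM."""
    return [i for i in feed_items(rss_xml) if matches_taste(i, keywords)]
```

The point of splitting filtering from summarization is cost: the LLM only ever sees the short list that survived, not the whole firehose.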

Correcting or translating text

I asked an LLM to review this blog post. The instructions were simple: don't mess with my style, just flag the important fixes. I also asked for a permalink suggestion — one single English word, ideally fun or Gilfoyle-approved. The result is what you’re reading now. This very paragraph was, in fact, fully translated from a French original 🙂 .

What else?

I’ve shown a few cases where LLMs save me time or help improve my productivity. There are many more I haven’t covered, where I use LLMs to generate reusable scripts. Once a working solution is cooked, the LLM is out of the picture — most of the time. When a script needs to depend on AI to work, I mostly use Mistral: cheaper than OpenAI, and I don’t have to sacrifice my soul (or card) to use it. Automatic replies and content summaries are cases where interactive AI calls make sense. Still, I prefer to limit such usage — it makes my productivity overly dependent on fragile APIs and unpredictable outputs.

Productivity monk

I have picked up a few habits recently:

  • Inbox zero by bedtime. Unhandled mails go to TickTick.
  • Tasks default to next week. If they matter, they’ll wait.
  • One work task per day. If it drags, I commit or kill it.
  • Articles get bookmarked. Read later—or never. Doesn’t matter.
  • Tasks get automated. Or ignored.
  • Midnight is my hard stop. Usually...
  • Everything goes in TickTick.
  • No date = no task. No surprises.
  • Task and blog ideas are dumped into TickTick as notes, voice or text.
  • LLMs get a few hours. That’s it. And only for automation.
  • LinkedIn runs on auto-reply.
  • Same rules at home and work. One brain. Scripts everywhere.
  • I keep folders of tabs—Wednesday, Friday, Daily. I open them when it’s time. Not before.
  • I use browser userscripts to bend websites to my will. UX included.
  • Family runs on self-service. Automation takes care of the rest.
  • And a few things don’t change—only improve: Backups and monitoring for everything. Unit tests for all my scripts. And pipelines. Obviously.

This isn’t a system. It’s survival. Simplicity is the only thing that scales, especially with kids and ADHD.

Lazy coding done right

When performing a task, even if it's a one-off, I believe it's a mistake to approach it with a short-term vision. We might solve many problems just for one occasion, but that's the nature of any project; each one is unique and happens only once. Should we really ship it with a short-term vision and skip automation, documentation, and testing just because they require more effort? Should we be so lazy?

I believe being lazy can be a good thing, but we shouldn't be so lazy that we skip essential steps like testing, documentation, information processing and retention, automation, collaboration, asking questions, and improving processes. These efforts will surely pay off for our knowledge and efficiency, or if not for ours, then for the team's knowledge and efficiency, for maintainability, and to reduce tech debt and bus factor. Of course, that requires thinking like a chess player, calculating our moves, and doing some planning.

For example, during an infrastructure migration, we faced many integrations that needed testing from the ops side, yet we lacked internal QA expertise. We rushed to collaborate with others to gather information on testing their flows. Even though we didn't have time to write automated tests immediately, I documented all the information so we could use it for future monitoring and automation purposes.

Another instance is our workspace setups, which were initially painful, yet many developers accepted the pain. When I had to use their setup script, I found many outdated or irrelevant steps and undocumented instructions. I automated those steps and fixed the instructions. Now newcomers recognize the value of this, and even people external to the project can set it up without prior knowledge, saving days of effort and enabling easier contributions.

When I have to do a task, even just once, I try to make a script and document it. If I have to do it more than once, I try to automate even more parts. Each time I return to a task I'm becoming too familiar with, I feel the urge to improve the process, document and automate it, make it configurable, flexible, shared, robust, and efficient. When performing repetitive operations, I script them, always. At least partially if not fully. If mistakes occur, the fixes are likely to be automated or documented.

When asking for help, I provide context, and when doing something useful, I share the news or update the docs so people can be aware and benefit from it. I often receive positive feedback for this approach, as people start to realize that automation is more efficient, forgiving, and less risky than repeating operations manually.

Sometimes it's more tempting and fun to spend two days automating something rather than spending an hour running an imperfect automation hundreds of times. You have to balance the pros and cons of fun versus efficiency, depending on your priorities. However, it's risky to automate something you don't fully understand. You need to acquire some level of expertise in a topic before embracing automation and contributing improvements. It's dangerous to automate something you don't know how to run, debug, or test manually.

LLMs can help automate tasks, but you should be able to code and debug without them. You need to perform a task fully without LLMs or scripting before attempting to script it. It's good practice to study the documentation, API, and man pages of the tools you use and learn to troubleshoot effectively.

I believe every second counts, and even if we're solving a unique problem for the first time, the knowledge gained is worth documenting. This way, we can extract a process or history from it, which can be helpful for us, our team, or anyone else if shared.

Prompting tips for maintenance tasks – Part 2

This blog is also a living document for myself so I can improve and reference this working pattern in the future.

Model selection

If using ChatGPT for coding tasks, especially maintenance tasks, opt for o1; other models are crap and will hallucinate or forget more of the original code.

Avoiding regressions

If the goal is to alter existing code, ChatGPT risks breaking it by dropping parts of it from the final result. A prompt like the one below reduces the feedback loop and future debugging. This is the prompt I use when I want to optimize for backward compatibility and avoid too large a diff between old and new code.


[SPEC, aka insert-here your own description of your problem to be solved by ChatGPT/LLM, with instructions, followed by the text below]

Here is the code to be modified. I want the complete code as the final result, with the same number of functions as in the original.

Show me the final file with the modifications. Do not omit code for brevity; keep untouched code the same as before, with no new comments and no changes to the existing code styling, syntax, indentation, or comments.

[CODE, aka Insert here the original code to be maintained/modified/debugged by the LLM]

When ChatGPT is done with the code generation, I load the original and new versions of the code into my favorite diff viewer, or something like https://www.diffchecker.com/text-compare/, and review the differences. When it looks good, I test, then I commit.
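If you'd rather keep the review step offline, Python's stdlib can replace the online diff tool. A minimal sketch:

```python
# Diff the original file against the LLM's rewrite before testing/committing,
# using Python's stdlib instead of an online diff tool.
import difflib

def review_diff(original: str, modified: str) -> str:
    """Unified diff between the original code and the LLM's proposed version."""
    return "".join(difflib.unified_diff(
        original.splitlines(keepends=True),
        modified.splitlines(keepends=True),
        fromfile="original", tofile="llm",
    ))
```

An empty result means the model returned the file byte-for-byte unchanged, which is itself worth catching: it usually means the instruction was ignored.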

See also


Flow

  • Cannot really stop coding while in a task. Coding is fun.
  • Cannot really stop improving after solving a task. I'm in the flow and each improvement unlocks another.
  • Cannot really stop bug fixing before solving it. Interruption is frustrating and I need to kill this problem.

Related

Continuous improvement is addictive because of the desire to reach perfection and repair the broken window (cf. the broken windows theory).

Thanks to Vincent L. for the thoughts that inspired this blog.