redacted.sh: share your logs, not your secrets

Quick post. Sometimes you need to share logs on public issue trackers, forums... and it's normal to want to protect your secrets, tokens, and IPs first.

I've cooked up my own minimal Bash script for this quest, which I've just added to my public shared snippets: https://gitea.zoemp.be/sansguidon/snippets/raw/branch/main/redacted.sh

#!/usr/bin/env bash
# redacted.sh: pipe logs through sed substitutions that mask secrets.
# Requires GNU sed (\b and \S are GNU extensions).

# Default rules: IPv4 addresses, domain names, long base64-ish tokens,
# and password=... values. The domain rule is deliberately greedy and
# will also catch things like file names ("app.log"); better safe than sorry.
default_rules=(
  's/[0-9]\{1,3\}\(\.[0-9]\{1,3\}\)\{3\}/<REDACTED_IP>/g'
  's/\b[a-zA-Z0-9._-]\+\.[a-zA-Z]\{2,\}\b/<REDACTED_DOMAIN>/g'
  's/\b[A-Za-z0-9+\/=]\{20,\}\b/<REDACTED_TOKEN>/g'
  's/\(password=\)\S\+/\1<REDACTED_PASS>/g'
)

# Leading arguments that look like sed substitutions (s/...) are custom
# rules; everything after them is treated as file names. Custom rules
# replace the defaults rather than extending them.
rules=()
while [[ $1 =~ ^s/ ]]; do
  rules+=("$1")
  shift
done
[[ ${#rules[@]} -eq 0 ]] && rules=("${default_rules[@]}")

# Build one -e option per rule so sed applies them in order.
sed_expr=()
for r in "${rules[@]}"; do
  sed_expr+=( -e "$r" )
done

# If files are passed, process them to stdout.
# If none, read from stdin to stdout.
if [[ $# -gt 0 ]]; then
  sed "${sed_expr[@]}" "$@"
else
  sed "${sed_expr[@]}"
fi
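
Usage: custom sed rules (anything starting with s/) go before the file names and replace the defaults; with no files, it reads stdin. A few examples, where the file and service names are just placeholders:

# Default rules, file to stdout:
./redacted.sh app.log > app.redacted.log

# Default rules, reading from stdin:
journalctl -u myapp | ./redacted.sh

# Your own rules instead of the defaults:
./redacted.sh 's/acme-corp/<REDACTED_CUSTOMER>/g' app.log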

Feel free to reuse, copy, or extend it, and contact me to give feedback! 💚

💌 The best way to get in touch is via my email morgan at zoemp dot be. You can also follow me on the Fediverse / Mastodon at @sansguidon@mamot.fr. I speak (a lot) French, English and a bit of Dutch.


LLMs – Chat Interfaces vs. Raw APIs: Why I Choose Conversations

I recently read Max Woolf's post on LLM use, where he explains why he rarely uses generative LLMs directly, preferring raw APIs for control. It's an interesting take, but I fundamentally disagree. For me, chat interfaces aren't just convenient—they’re an essential part of understanding.

LLMs are more than code generators. They are interactive partners. When I use ChatGPT, Mistral, or Copilot in chat mode, it's not just about fast results. It's about exploring ideas, challenging my thinking, and refining concepts. The back-and-forth, the debugging, the reflection—it’s like pair programming with a tireless assistant. If I need to test an idea or explore a concept, the chat interface is perfect for that: it's always available, from any device, no API or IDE needed.

Max argues APIs allow for more fine-tuning—system prompts, temperature control, constraints. Sure. But in a chat session, you can iterate, switch topics, revisit past decisions, and even post-mortem the entire conversation, as a way to learn from it and log your decisions. And yes, I archive everything. I link these sessions to tickets in TickTick to revisit ideas. Try doing that with an API call.

The chat interface is a workspace, not a magic wand. It’s where you can think, break things, fix them, and learn. Isolating interactions to API calls removes that context, those learning moments. To me, that’s missing the point.

APIs are fine for deterministic output. But I prefer the chaos of conversation—it forces me to engage deeper, explore failures, and actually think. That’s why I don’t just use LLMs to generate. I use them to reason. Not just for hard skills, but soft skills too.


LLMified

Saving time and storage (with style)

LinkedIn

I've been replying to every recruiter for a long time. This became a chore, so I developed a userscript that loads on every LinkedIn conversation and calls Mistral AI to generate a reply in my preferred style. Whenever I open a conversation, I hit the button and it drafts an adequate reply, taking into account the history of the conversation, my priorities of the moment, the language of the conversation, the tone, etc.

It's then up to me to post the proposed answer as-is, edit it, or discard it.
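
The userscript itself is JavaScript, but the request it makes boils down to a single Mistral chat call. A rough shell equivalent, just to show the shape of it (the model name, system prompt, and sample message are placeholders, not my real ones):

#!/usr/bin/env bash
# Hypothetical equivalent of the userscript's API call, for illustration only.
curl -s https://api.mistral.ai/v1/chat/completions \
  -H "Authorization: Bearer $MISTRAL_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "mistral-small-latest",
    "messages": [
      {"role": "system",
       "content": "Reply on my behalf to this recruiter. Match the language and tone of the conversation. Current priority: not actively looking, open to freelance."},
      {"role": "user",
       "content": "Hi Morgan, I have a great opportunity in Brussels..."}
    ]
  }' | jq -r '.choices[0].message.content'

The real script also feeds in the conversation history, which is how replies stay in context.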

Dropbox storage optimization

As a parent, I developed the habit of archiving digital souvenirs of our kid’s life. Those pics and videos accumulate. As someone very organized, I like to avoid duplicates and also save the correct metadata (EXIF) in our pics and videos, which proves to be challenging with older pics from WhatsApp groups.
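
For the WhatsApp metadata problem, the usual fix is to reconstruct the capture date from the filename. A minimal sketch, assuming exiftool is installed and the common IMG-YYYYMMDD-WA####.jpg naming; the noon timestamp is arbitrary, since the time of day is lost:

#!/usr/bin/env bash
# Restore EXIF dates on WhatsApp images named like IMG-20230517-WA0003.jpg.
for f in IMG-*-WA*.jpg; do
  d=$(echo "$f" | sed 's/^IMG-\([0-9]\{4\}\)\([0-9]\{2\}\)\([0-9]\{2\}\)-.*/\1:\2:\3/')
  # The time of day is unknown, so noon is used as a neutral default.
  exiftool -overwrite_original "-DateTimeOriginal=${d} 12:00:00" "$f"
done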

I also wanted to ensure that every time pictures and videos are shared via WhatsApp family groups, I collect them in our Dropbox. This is done via Syncthing-Fork (the Android client) and Syncthing servers running on my Cloudron and my MacBook Pro. Syncthing monitors all the folders that can contain videos or pictures, which lets me effortlessly:

  • Move all new pics/videos from Android-monitored folders to Dropbox. Since those folders are kept in sync via Syncthing, moving a pic/video out of a monitored folder into Dropbox removes it from every location Syncthing watches on my Android. This keeps things organized on Dropbox while also making room on my Android device. Syncthing is set up manually, but it's easy to manage.
  • Detect all pics/videos that don't contain a face, using YOLOv5 by Ultralytics. The script was generated via an LLM.
  • Remove duplicates via some scripts (LLM-generated) or via https://github.com/arsenetar/dupeguru (manually, through its UI).
  • Compress/convert pictures and videos using https://ffmpeg.org/, installed locally. This saves hundreds of GBs of data, which reduces the bandwidth and resources needed to download, sync, and display those files. It also means I need less storage, so I can keep a cheaper Dropbox subscription for longer. An LLM generated most of the scripts for this task (see the sketch after this list).
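
To give an idea of what those LLM-generated scripts look like, here's a condensed sketch combining the dedup and compression steps. Paths and settings are placeholders; it assumes bash 4+, sha256sum, and an ffmpeg build with libx265:

#!/usr/bin/env bash
# 1) Remove byte-identical duplicates, keeping the first file seen per checksum.
# 2) Recompress top-level videos with H.265 to save space.
cd ~/Dropbox/family-media || exit 1

declare -A seen
while IFS= read -r -d '' f; do
  sum=$(sha256sum "$f" | cut -d' ' -f1)
  if [[ -n ${seen[$sum]} ]]; then
    rm -- "$f"                # exact duplicate of a file we already kept
  else
    seen[$sum]=$f
  fi
done < <(find . -type f \( -name '*.jpg' -o -name '*.mp4' \) -print0)

for v in *.mp4; do
  # -crf 28 trades a little quality for much smaller files;
  # -map_metadata 0 keeps the original metadata.
  ffmpeg -i "$v" -map_metadata 0 -c:v libx265 -crf 28 -c:a copy "${v%.mp4}_small.mp4"
done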

Newsletter summaries

As explained in a previous post (https://morgan.zoemp.be/indieblog/), I keep track of all new blog posts listed on https://indieblog.page/. I first cooked up a daily RSS feed for it, using LLMs. This proved useful, but as the list of RSS feeds grew, I realized I needed to filter the list better or limit the output. I don't want the filter to be random; I want it based on my taste. So I created another RSS feed: an LLM-aided summary of the first one.
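
The summarizing step uses the same kind of Mistral call as the LinkedIn sketch above. Roughly, with a placeholder feed URL and prompt, assuming curl and jq are available:

#!/usr/bin/env bash
# Fetch the daily feed and ask Mistral for a taste-filtered summary.
FEED_URL="https://example.com/daily.xml"   # placeholder, not the real feed
entries=$(curl -s "$FEED_URL")

payload=$(jq -n --arg entries "$entries" '{
  model: "mistral-small-latest",
  messages: [{role: "user",
              content: ("Summarize these feed entries; keep only posts matching my interests:\n\n" + $entries)}]
}')

curl -s https://api.mistral.ai/v1/chat/completions \
  -H "Authorization: Bearer $MISTRAL_API_KEY" \
  -H "Content-Type: application/json" \
  -d "$payload" | jq -r '.choices[0].message.content'

The output then still needs wrapping into RSS XML before it can be served as a feed.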

Correcting or translating text

I asked an LLM to review this blog post. The instructions were simple: don't mess with my style, just flag the important fixes. I also asked for a permalink suggestion — one single English word, ideally fun or Gilfoyle-approved. The result is what you're reading now. This very paragraph was, in fact, fully translated from a French original 🙂.

What else?

I’ve shown a few cases where LLMs save me time or help improve my productivity. There are many more I haven’t covered, where I use LLMs to generate reusable scripts. Once a working solution is cooked, the LLM is out of the picture — most of the time. When a script needs to depend on AI to work, I mostly use Mistral: cheaper than OpenAI, and I don’t have to sacrifice my soul (or card) to use it. Automatic replies and content summaries are cases where interactive AI calls make sense. Still, I prefer to limit such usage — it makes my productivity overly dependent on fragile APIs and unpredictable outputs.


Productivity monk

I've picked up a few habits recently:

  • Inbox zero by bedtime. Unhandled mails go to TickTick.
  • Tasks default to next week. If they matter, they’ll wait.
  • One work task per day. If it drags, I commit or kill it.
  • Articles get bookmarked. Read later—or never. Doesn’t matter.
  • Tasks get automated. Or ignored.
  • Midnight is my hard stop. Usually...
  • Everything goes in TickTick.
  • No date = no task. No surprises.
  • Task and blog ideas are dumped into TickTick as notes, voice or text.
  • LLMs get a few hours. That’s it. And only for automation.
  • LinkedIn runs on auto-reply.
  • Same rules at home and work. One brain. Scripts everywhere.
  • I keep folders of tabs—Wednesday, Friday, Daily. I open them when it’s time. Not before.
  • I use browser userscripts to bend websites to my will. UX included.
  • Family runs on self-service. Automation takes care of the rest.
  • And a few things don’t change—only improve: Backups and monitoring for everything. Unit tests for all my scripts. And pipelines. Obviously.

This isn’t a system. It’s survival. Simplicity is the only thing that scales, especially with kids and ADHD.


Lazy coding done right

When performing a task, even if it's a one-off, I believe it's a mistake to approach it with a short-term vision. We might solve many problems just for one occasion, but that's the nature of any project; each one is unique and happens only once. Should we really ship it quick and dirty, skipping automation, documentation, and testing just because they require more effort? Should we be so lazy?

I believe being lazy can be a good thing, but we shouldn't be so lazy that we skip essential steps like testing, documentation, information processing and retention, automation, collaboration, asking questions, and improving processes. These efforts pay off for our own knowledge and efficiency, or failing that, for the team's; they improve maintainability and reduce tech debt and the bus factor. Of course, that requires thinking like a chess player, calculating our moves, and doing some planning.

For example, during an infrastructure migration, we faced many integrations that needed testing from the ops side, yet we lacked internal QA expertise. We rushed to collaborate with others to gather information on testing their flows. Even though we didn't have time to write automated tests immediately, I documented all the information so we could use it for future monitoring and automation purposes.

Another instance is our workspace setups, which were initially painful, yet many developers accepted them. When I had to use their setup script, I found many outdated or irrelevant steps and undocumented instructions. I automated those steps and fixed the instructions. Now newcomers recognize the value of this, and even people external to the project can set it up without prior knowledge, saving days of effort and enabling easier contributions.

When I have to do a task, even just once, I try to make a script and document it. If I have to do it more than once, I try to automate even more parts. Each time I return to a task I'm becoming too familiar with, I feel the urge to improve the process, document and automate it, make it configurable, flexible, shared, robust, and efficient. When performing repetitive operations, I script them, always. At least partially if not fully. If mistakes occur, the fixes are likely to be automated or documented.

When asking for help, I provide context, and when doing something useful, I share the news or update the docs so people can be aware and benefit from it. I often receive positive feedback for this approach, as people start to realize that automation is more efficient, forgiving, and less risky than repeating operations manually.

Sometimes it's more tempting and fun to spend two days automating something rather than spending an hour running an imperfect automation hundreds of times. You have to balance the pros and cons of fun versus efficiency, depending on your priorities. However, it's risky to automate something you don't fully understand. You need to acquire some level of expertise in a topic before embracing automation and contributing improvements. It's dangerous to automate something you don't know how to run, debug, or test manually.

LLMs can help automate tasks, but you should be able to code and debug without them. You need to perform a task fully without LLMs or scripting before attempting to script it. It's good practice to study the documentation, API, and man pages of the tools you use and learn to troubleshoot effectively.

I believe every second counts, and even if we're solving a unique problem for the first time, the knowledge gained is worth documenting. This way, we can extract a process or history from it, which can be helpful for us, our team, or anyone else if shared.
