ADHD and hacking an ever-growing todo list

Are you familiar with FOMO? Surely.

After using TickTick long enough and looking at my stats, I've noticed I'm efficient at completing the tasks I focus on, even if I don't always focus on the most important ones. Yet I have this feeling of being overwhelmed by my self-imposed todo list. Whatever I do, and no matter how long I work on my todos, the list keeps growing and distracts me from living my life away from screens. There must be rational explanations and solutions, right?

I've revisited my backlog a hundred times without allowing myself to delete those items, and without finding the motivation to tackle enough of them. Occasionally I do some triage, e.g. I bump the priority of some favorite tasks by adding custom tags like "x2", "x3", etc. That allows me to find those favorite items later and sort them by priority/urgency.

Yet I still had around 400+ tasks, far from my Inbox Zero principle. How to solve that? I did manage to become efficient at email triage, and at automation, coding, and problem solving; I could probably win at any board game; I'm a débrouillard, I know. So how do I solve this tiny issue with my todos without depriving myself of sleep, and without recycling my todo items into the trash?

The key is that most of those todos are not urgent and will be solved in the long term, with no specific date. They resemble an idea, an inspiration, or a motivation more than a problem to be solved.

And that's, for now, my trick to tackle those needed-for-later items in TickTick -> tag them as "💡ideas" and convert them into a Note.


Slice the backlog

I have ADHD, and it's a real struggle to prioritize some tasks over others when the passion to do everything is so strong. And yet, sometimes you end up overwhelmed.

After reading the first four volumes of the Samurai comic series (the Emperor and the Thirteenth Prophet cycles, that is), I got back to doodling, a samurai of course, in front of my admiring son (he's four, so obviously everything I draw looks great to him ;-p). Anyway, I felt like applying the same discipline myself; lacking blades, I'm opting for trimming the fat off my TickTick task list.

And the timing is good: I'm currently reading The 4-Hour Workweek by Tim Ferriss, and one passage really struck me, about sorting out the things you merely want from the things you absolutely have to do. I couldn't help thinking of Steve Jobs, who had a reputation for associating less with perfection and excess with a lack of vision.

In short, I have a TickTick list of more than 500 tasks, some of which recur every week, and every day my RSS feed reader pulls in fascinating new entries. I also have a family I try to spend quality time with, and plenty of projects, books, TV series, and movies on my wish list. We don't really have control over the time that passes, and money comes and goes but doesn't solve everything. What I can choose, though, is where to put my energy.

Mindful of all the renovation work awaiting me, of the family that will grow by September, and of my demanding work as a freelancer, I really have to cut deep.

So I added a recurring task in TickTick to eliminate around ten tasks every day; anything that would take up my time and is only an idea or a wish will most likely end up in the trash, or will have to live somewhere other than TickTick.


A castle made of bazaar

I've accumulated quite a lot of nerdy automations in my tech stack. I'll try to give an idea of what I've built up to this day.

For the Cloudron instance I run

  • A cron job that monitors disk usage, using bash + the Cloudron API; it alerts me via email and ntfy when any folder's usage exceeds 75% (see the sketch after this list).
  • A cron job that checks whether any apps are timing out and restarts them via the Cloudron API, in bash too.
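
Here is a minimal sketch of the disk-usage part. It only checks mounted filesystems via df rather than going through the Cloudron API like the real job does, the email alert is left out, and the ntfy URL and threshold below are placeholders rather than my actual values.

#!/usr/bin/env bash
# Minimal sketch: notify when any mounted filesystem is more than 75% full.
set -euo pipefail

NTFY_URL="https://ntfy.example.com/disk-alerts"   # placeholder: a self-hosted ntfy topic
THRESHOLD=75

# df -P prints one line per filesystem; the usage percentage is the 5th column (e.g. "82%").
df -P | tail -n +2 | while read -r _fs _size _used _avail pcent mount; do
  usage="${pcent%\%}"
  if [ "$usage" -gt "$THRESHOLD" ]; then
    # ntfy accepts a plain POST body as the notification message.
    curl -s -d "Disk usage on ${mount} reached ${usage}%" "$NTFY_URL" > /dev/null
  fi
done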

DNS Monitoring

  • I take a snapshot of my Hetzner DNS configuration every 5 minutes and watch for diffs using Changedetection (a sketch of the snapshot part is below).
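
The snapshot side is a small bash script; a simplified sketch, assuming the Hetzner DNS API's zone listing and zone-file export endpoints, an API token in a HETZNER_DNS_TOKEN variable, and placeholder paths. The diffing itself is left to Changedetection.

#!/usr/bin/env bash
# Minimal sketch: export every Hetzner DNS zone file to a local snapshot folder.
set -euo pipefail

TOKEN="${HETZNER_DNS_TOKEN:?set HETZNER_DNS_TOKEN}"   # assumed env variable
SNAPSHOT_DIR="${1:-/var/backups/hetzner-dns}"         # placeholder location
mkdir -p "$SNAPSHOT_DIR"

# List all zones, then export each zone file under its domain name (pagination ignored for brevity).
curl -s -H "Auth-API-Token: $TOKEN" "https://dns.hetzner.com/api/v1/zones" \
  | jq -r '.zones[] | "\(.id) \(.name)"' \
  | while read -r zone_id zone_name; do
      curl -s -H "Auth-API-Token: $TOKEN" \
        "https://dns.hetzner.com/api/v1/zones/${zone_id}/export" \
        > "${SNAPSHOT_DIR}/${zone_name}.zone"
    done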

Uptime monitoring

  • I'm using Uptime Kuma to monitor the status pages of several services/APIs I'm relying on (Dropbox, OpenAI, Mistral AI, ...) as well as my own self-hosted apps. I get ntfy alerts in case any is failing.

Feed generators

Files Syncing

  • A cron job syncing my .torrent files from Dropbox to qBittorrent, using rclone (see the sketch after this list).
  • A cron job syncing my downloaded audiobooks to AudioBookShelf, using rsync.
  • A cron job syncing my downloaded ebooks to Calibre, by uploading the files to the Calibre API, using bash.
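
The torrent sync is essentially a one-liner; a sketch, assuming an rclone remote named dropbox and a local folder that qBittorrent watches for new .torrent files (both names are placeholders).

#!/usr/bin/env bash
# Minimal sketch: copy new .torrent files from Dropbox into qBittorrent's watch folder.
set -euo pipefail

# rclone copy only adds missing files; it never deletes anything in the watch folder.
rclone copy "dropbox:torrents" "/data/qbittorrent/watch" \
  --include "*.torrent" \
  --log-level INFO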

Music management

  • A cron job syncing my downloaded music (torrents) to my main Music Library, using rsync.
  • A cron job verifying the quality of my Music Library content using mp3val and reporting corrupted files via ntfy (see the sketch after this list).
  • A cron job verifying the quality of my Soulseek download folder using mp3val and only moving the verified files to my Music Library.
  • A user script integrating with the ListenBrainz/LastFM scrobbler for when I listen to live radio from RTBF (they use Radioplayer technology).
  • A user script to automatically filter the search results within Soulseek (web version running on Cloudron).
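
The mp3val check boils down to something like this sketch; the paths, the ntfy topic, and the report location are placeholders, and the real job is a bit more involved.

#!/usr/bin/env bash
# Minimal sketch: scan the library with mp3val and notify if it reports problems.
set -euo pipefail

MUSIC_DIR="${1:-/data/music}"                      # placeholder library path
NTFY_URL="https://ntfy.example.com/music-alerts"   # placeholder ntfy topic
REPORT="/var/log/mp3val-report.txt"                # placeholder report location

# Run mp3val over every mp3 and keep only the WARNING/ERROR lines it prints.
find "$MUSIC_DIR" -type f -name '*.mp3' -print0 \
  | xargs -0 -r mp3val \
  | grep -E '^(WARNING|ERROR):' > "$REPORT" || true

# If anything was reported, send a short ntfy notification pointing at the full report.
if [ -s "$REPORT" ]; then
  curl -s -H "Title: mp3val found problems in the music library" \
    -d "$(wc -l < "$REPORT") problem line(s), full report in ${REPORT}" \
    "$NTFY_URL" > /dev/null
fi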

Photos management

  • A cron job that syncs my photo library between Dropbox and Immich, using rclone, but only for pictures and videos under a certain size (see the sketch after this list).
  • A cron job that generates Immich albums made only of pictures of specific people.
  • Scripts that I run ad hoc, using ffmpeg, to compress my pictures and videos and fix their EXIF dates when needed.
  • Scripts that I run ad hoc, using Syncthing, to remove all pictures/videos from my phone (WhatsApp and Camera folders) and move them to Dropbox, before I compress and triage them. Anything on Dropbox is then synced to Immich, and that's how I keep my phone clean.
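
The size-limited photo sync relies on rclone's filtering flags; a sketch, assuming a remote named dropbox and a local folder that Immich picks up as a library (the paths, extensions, and size cap are placeholders).

#!/usr/bin/env bash
# Minimal sketch: mirror small pictures/videos from Dropbox into a folder Immich reads.
set -euo pipefail

rclone sync "dropbox:Photos" "/data/immich/library" \
  --max-size 200M \
  --include "*.{jpg,jpeg,png,heic,mp4,mov}" \
  --log-level INFO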

Emails management

  • A script that checks my emails for invoices (with attachments or downloadable links) and sends them to my Dropbox forwarding email address, which in turn backs up those attachments in a specific folder I can process later.

Freelancer paperwork management

  • I have several scripts to rename my receipts and invoices with the right date, invoice number, and provider, and to organize them per year/quarter/month; it's done in PHP.
  • A user script to fill my timesheet automatically based on my declared days off.

Web curation and bookmark management

  • A cron job that browses my recent Shaarli shares and, when needed, adds tags, an HN thread link, a Web Archive link, and a summary. It's done in Python.
  • A cron job that browses my Miniflux unread entries and marks as read the ones I will probably not care about, using Mistral AI (see the sketch after this list).
  • A cron job that browses my Miniflux unread entries and sends me an email with the unread entries summarized and grouped by feed, a bit like the late Subworthy.com (by Phil Stephens) used to do around 2022.
  • A user script that adds TL;DR buttons at the bottom of my Miniflux entries, so I can get a quick summary generated by Mistral AI when needed.
  • A user script to warn me on any website if there is a Hacker News thread for the page I visit.
  • A user script to highlight and extract all top links from the current Hacker News thread.
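
The Miniflux triage job is roughly the following sketch. It assumes a Miniflux API token, a Mistral API key, and a crude yes/no prompt; the instance URL, model name, and filtering criteria are placeholders, and the real prompt is more elaborate.

#!/usr/bin/env bash
# Minimal sketch: ask Mistral whether each unread Miniflux entry is worth reading,
# then mark the uninteresting ones as read.
set -euo pipefail

MINIFLUX_URL="https://miniflux.example.com"   # placeholder instance
MINIFLUX_TOKEN="${MINIFLUX_TOKEN:?}"
MISTRAL_API_KEY="${MISTRAL_API_KEY:?}"

# Fetch a batch of unread entries (id + title).
entries="$(curl -s -H "X-Auth-Token: $MINIFLUX_TOKEN" \
  "$MINIFLUX_URL/v1/entries?status=unread&limit=50")"

to_mark=()
while IFS=$'\t' read -r id title; do
  # Ask the model for a bare yes/no about the title (deliberately simplistic here).
  answer="$(curl -s https://api.mistral.ai/v1/chat/completions \
    -H "Authorization: Bearer $MISTRAL_API_KEY" \
    -H "Content-Type: application/json" \
    -d "$(jq -n --arg t "$title" '{
          model: "mistral-small-latest",
          messages: [{role: "user",
                      content: ("Answer only yes or no: is this article title worth my time? " + $t)}]
        }')" | jq -r '.choices[0].message.content' | tr '[:upper:]' '[:lower:]')"
  if [[ "$answer" == no* ]]; then
    to_mark+=("$id")
  fi
done < <(echo "$entries" | jq -r '.entries[] | [.id, .title] | @tsv')

# Mark the uninteresting entries as read in one API call.
if [ "${#to_mark[@]}" -gt 0 ]; then
  curl -s -X PUT -H "X-Auth-Token: $MINIFLUX_TOKEN" -H "Content-Type: application/json" \
    -d "$(printf '%s\n' "${to_mark[@]}" | jq -s '{entry_ids: ., status: "read"}')" \
    "$MINIFLUX_URL/v1/entries" > /dev/null
fi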

LinkedIn management

  • A user script that adds a reply generator in LinkedIn conversations, using Mistral AI.

Obsidian Backups

  • I'm using aicommit2, called from the Obsidian Git plugin, to generate meaningful commit messages about what is being backed up.

This looks like quite a lot, and that's not even all of it.


redacted.sh: share your logs, not your secrets

Quick post. Sometimes you need to share logs on public issue trackers, forums... and it's normal to want to protect your secrets, tokens, and IPs.

I've cooked up my own minimal bash script for this, which I've just added to my public snippets: https://gitea.zoemp.be/sansguidon/snippets/raw/branch/main/redacted.sh

#!/usr/bin/env bash

# Default redaction rules (GNU sed expressions): IPv4 addresses, domain names,
# long token-like strings, and password=... values.
default_rules=(
  's/[0-9]\{1,3\}\(\.[0-9]\{1,3\}\)\{3\}/<REDACTED_IP>/g'
  's/\b[a-zA-Z0-9._-]\+\.[a-zA-Z]\{2,\}\b/<REDACTED_DOMAIN>/g'
  's/\b[A-Za-z0-9+\/=]\{20,\}\b/<REDACTED_TOKEN>/g'
  's/\(password=\)\S\+/\1<REDACTED_PASS>/g'
)

# Leading arguments that look like sed substitutions (s/...) become the rules;
# remaining arguments are treated as input files. With no custom rules, fall back to the defaults.
rules=()
while [[ $1 =~ ^s/ ]]; do
  rules+=("$1")
  shift
done
[[ ${#rules[@]} -eq 0 ]] && rules=("${default_rules[@]}")

sed_expr=()
for r in "${rules[@]}"; do
  sed_expr+=( -e "$r" )
done

# If files are passed, process them to stdout.
# If none, read from stdin to stdout.
if [[ $# -gt 0 ]]; then
  sed "${sed_expr[@]}" "$@"
else
  sed "${sed_expr[@]}"
fi
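
For example, it can redact a file with the default rules, or take custom sed expressions (anything starting with s/) which then replace the defaults; the service name below is just an illustration:

# Redact a log file with the default rules
./redacted.sh app.log > app.redacted.log

# Read from stdin and use a custom rule instead of the defaults (a made-up user= pattern)
journalctl -u myapp | ./redacted.sh 's/\(user=\)\S\+/\1<REDACTED_USER>/g'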

Feel free to reuse, copy, extend, contact me to give feedback! 💚

💌 The best way to get in touch is via my email morgan at zoemp dot be. You can also follow me on the Fediverse / Mastodon at @sansguidon@mamot.fr. I speak (a lot) French, English and a bit of Dutch.


LLMs – Chat Interfaces vs. Raw APIs: Why I Choose Conversations

I recently read Max Woolf's post on LLM use, where he explains why he rarely uses generative LLMs directly, preferring raw APIs for control. It's an interesting take, but I fundamentally disagree. For me, chat interfaces aren't just convenient—they’re an essential part of understanding.

LLMs are more than code generators. They are interactive partners. When I use ChatGPT, Mistral, or Copilot in chat mode, it's not just about fast results. It's about exploring ideas, challenging my thinking, and refining concepts. The back-and-forth, the debugging, the reflection—it’s like pair programming with a tireless assistant. If I need to test an idea or explore a concept, the chat interface is perfect for that: it's always available, from any device, no API or IDE needed.

Max argues APIs allow for more fine-tuning—system prompts, temperature control, constraints. Sure. But in a chat session, you can iterate, switch topics, revisit past decisions, and even post-mortem the entire conversation, as a way to learn from it and log your decisions. And yes, I archive everything. I link these sessions to tickets in TickTick to revisit ideas. Try doing that with an API call.

The chat interface is a workspace, not a magic wand. It’s where you can think, break things, fix them, and learn. Isolating interactions to API calls removes that context, those learning moments. To me, that’s missing the point.

APIs are fine for deterministic output. But I prefer the chaos of conversation: it forces me to engage more deeply, explore failures, and actually think. That's why I don't just use LLMs to generate. I use them to reason. Not just for hard skills, but for soft skills too.

Mentioned in https://lukealexdavis.co.uk/posts/apis-vs-chatbots/

