With Mistral AI's flat price for unlimited API calls on the Experiment plan, it's just too easy to build your own automations, composed of userscripts and custom tools that complement your favorite websites.
I've built such tools to save time on some of my most time-consuming tasks: replying to LinkedIn requests and emails. This post is just a record of how I do it in 2025, and hopefully inspiration for anyone willing to save time.
It's also a way to be transparent with recruiters and contacts who might wonder if I'm using AI for anything beyond code.
Examples:
I've built userscripts for the Fastmail web UI to inject generated answers; see the screenshot below.
I've also built userscripts for the LinkedIn web UI to:
Suggest replies for the current conversation, without automatically sending them.
Display calendar events and email conversations related to the contact/job I'm viewing.
I automate other stuff too (monitoring, RSS, accounting) but that's not today's post.
Boring digital things should be hands-off. Fun digital things should be hands-on. This also touches on some of the ideas from The 4-Hour Workweek.
I consult on custom automation. My email is below.
The best way to get in touch is via my email morgan at zoemp dot be. You can also follow me on the Fediverse / Mastodon at @sansguidon@mamot.fr. I speak (a lot) French, English and a bit of Dutch.
I stopped using Spotify a long time ago and switched to Navidrome and Jellyfin/Finamp on Android, which, luckily for me, ship with scrobblers for Last.fm and ListenBrainz.
They integrate a player compatible with the awesome web-scrobbler extension (https://github.com/web-scrobbler), which is also meant to work with live radio web players. Sadly I couldn't make it work with those radios, even though the project history shows it worked years ago. There is an integration for RTBF live radios at https://github.com/web-scrobbler/web-scrobbler/pull/2377/files which was supposed to cover Classic 21 as well, but the code seems obsolete now and I couldn't ship a working fix. So I paused a bit, gave up... but not for long.
I had the idea of running Shazam in the background and sending its findings to some backend to scrobble my songs, and I expected it to be epic and complicated. I tried the Shazam web extension anyway, and noticed that most of the time it mismatched the song and artist displayed on the radio website. That's concerning. Why? I don't know yet, so in the meantime I've contacted the RTBF website admin about it.
Anyway, diving back into the code and opening the Dev Tools in Brave helped me discover an interesting URL: https://core-search.radioplayer.cloud/056/qp/v4/events/?rpId=29 which returns the songs played on this station: the recent ones, the next one, and the one currently playing!
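For the curious, here's a minimal sketch of how a script can pick the currently playing track out of such an events endpoint. Note: the field names (`results`, `status`, `artistName`, `name`) are assumptions for illustration; inspect the real JSON in Dev Tools before relying on them.

```python
import json

# Hypothetical payload shape -- the real radioplayer response may differ.
SAMPLE = json.loads("""
{
  "results": [
    {"name": "Song A", "artistName": "Artist A", "status": "played"},
    {"name": "Song B", "artistName": "Artist B", "status": "playing"},
    {"name": "Song C", "artistName": "Artist C", "status": "next"}
  ]
}
""")

def now_playing(payload):
    """Return (artist, title) of the event marked as currently playing."""
    for event in payload.get("results", []):
        if event.get("status") == "playing":
            return event.get("artistName"), event.get("name")
    return None

print(now_playing(SAMPLE))  # ('Artist B', 'Song B')
```

In the real userscript the payload would come from fetching the URL above, not from a hardcoded sample.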
I've adopted ListenBrainz as an alternative to Last.fm, but I still can't fully let go of the latter. That's why my userscript supports both, and it's hackable for those who want to extend it.
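Submitting a listen to ListenBrainz is the easy half: its API takes a small JSON body plus a user token. Here's a sketch of the payload builder (not my exact script, but the body shape matches ListenBrainz's documented submit-listens format):

```python
import time

LISTENBRAINZ_ENDPOINT = "https://api.listenbrainz.org/1/submit-listens"

def listenbrainz_payload(artist, title, listened_at=None):
    """Build the JSON body for a single ListenBrainz listen submission."""
    return {
        "listen_type": "single",
        "payload": [{
            "listened_at": int(listened_at or time.time()),
            "track_metadata": {"artist_name": artist, "track_name": title},
        }],
    }

# The actual request needs a user token from your ListenBrainz profile:
#   requests.post(LISTENBRAINZ_ENDPOINT,
#                 headers={"Authorization": f"Token {token}"},
#                 json=listenbrainz_payload("Artist", "Title"))
```

Last.fm is fussier (signed API calls), which is part of why supporting both in one userscript is the interesting bit.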
Feel free to share, copy, reuse and provide feedback! I'll keep this post updated if RTBF radio answers my questions or if I get interesting feedback. I've also mentioned this post in https://github.com/web-scrobbler/web-scrobbler/discussions/5327.
I recently read Max Woolf's post on LLM use, where he explains why he rarely uses generative LLMs directly, preferring raw APIs for control. It's an interesting take, but I fundamentally disagree. For me, chat interfaces aren't just convenient; they're an essential part of understanding.
LLMs are more than code generators. They are interactive partners. When I use ChatGPT, Mistral, or Copilot in chat mode, it's not just about fast results. It's about exploring ideas, challenging my thinking, and refining concepts. The back-and-forth, the debugging, the reflection: it's like pair programming with a tireless assistant. If I need to test an idea or explore a concept, the chat interface is perfect for that: it's always available, from any device, no API or IDE needed.
Max argues APIs allow for more fine-tuning: system prompts, temperature control, constraints. Sure. But in a chat session, you can iterate, switch topics, revisit past decisions, and even post-mortem the entire conversation, as a way to learn from it and log your decisions. And yes, I archive everything. I link these sessions to tickets in TickTick to revisit ideas. Try doing that with an API call.
The chat interface is a workspace, not a magic wand. It's where you can think, break things, fix them, and learn. Isolating interactions to API calls removes that context, those learning moments. To me, that's missing the point.
APIs are fine for deterministic output. But I prefer the chaos of conversation: it forces me to engage deeper, explore failures, and actually think. That's why I don't just use LLMs to generate. I use them to reason. Not just for hard skills, but soft skills too.
I do love code reviews, but I'm convinced they're best done live: reviewed, merged, and communicated immediately.
A few weeks ago I submitted changes through merge requests, and a few weeks later I had completely forgotten about their implementation.
I've context-switched a few times since then...
Today the change was merged by the repository maintainers, and a few colleagues then discussed one of its consequences. Only because I was in the same workspace did I react in time.
My mistake was probably not communicating more proactively about the change; since I'm neither the repository maintainer nor the release maintainer, I had no idea when it would be merged.
Anyway, several such merge requests are still queued. All recipes for future headaches.
I've been replying to all recruiters for a long time. This became a chore, so I developed a userscript that loads on every LinkedIn conversation and calls Mistral AI to generate a reply in my preferred style. For every conversation I open, I hit a button and it drafts an adequate reply, respecting the history of the conversation, my priorities of the moment, the language of the conversation, the tone, etc.
It is then up to me to post the proposed answer as-is, or to edit or discard it.
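At its core, the userscript just turns the visible conversation into a chat-completions request. Here's a sketch of that step in Python (the system prompt, model name and helper are illustrative, not my production values; the endpoint is Mistral's standard chat-completions API):

```python
# Sketch of the request a script can send to Mistral's chat API.
MISTRAL_ENDPOINT = "https://api.mistral.ai/v1/chat/completions"

def build_reply_request(conversation, style_prompt, model="mistral-small-latest"):
    """Turn a conversation (list of {sender, text} dicts) into a chat request body."""
    transcript = "\n".join(f"{m['sender']}: {m['text']}" for m in conversation)
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": style_prompt},
            {"role": "user",
             "content": f"Draft a reply to this conversation:\n{transcript}"},
        ],
    }

# Sending it (needs an API key):
#   requests.post(MISTRAL_ENDPOINT, json=build_reply_request(...),
#                 headers={"Authorization": f"Bearer {api_key}"})
```

The generated text is only injected into the reply box, never auto-sent; that last click stays human.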
Dropbox storage optimization
As a parent, I developed the habit of archiving digital souvenirs of our kid's life. Those pics and videos accumulate. Being very organized, I like to avoid duplicates and to save correct metadata (EXIF) in our pics and videos, which proves challenging with older pics from WhatsApp groups.
I also wanted to ensure that every time pictures and videos are shared via WhatsApp family groups, I collect them in our Dropbox. This is done via Syncthing-Fork (the Android client) and Syncthing servers running on my Cloudron and my MacBook Pro. Syncthing monitors all folders that can contain videos or pictures. From there, a few scripts help me effortlessly manage all those pics/videos:
Move all new pics/videos from Android-monitored folders to Dropbox. Since those folders are kept in sync via Syncthing, moving a pic/video out of a monitored folder into Dropbox removes it from every location Syncthing monitors on my Android device. This keeps Dropbox organized while also making room on the phone. Syncthing is set up manually, but it's easy to manage.
Detect all pics/videos that do not contain a face, using YOLOv5 by Ultralytics. The script was generated via LLM.
Compress/convert pictures and videos using ffmpeg (https://ffmpeg.org/), installed locally. This saves hundreds of GBs, which reduces the bandwidth and resources needed to download, sync and display those files. It also means I need less storage, so I can keep a cheaper Dropbox subscription for longer. An LLM generated most of the scripts needed for this task.
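The video half boils down to one ffmpeg invocation per file. A sketch of the command builder (the CRF value and codecs are illustrative defaults, not my exact settings; higher CRF means smaller files at lower quality):

```python
def compress_video(src, dst, crf=28):
    """Build an ffmpeg command re-encoding a video as H.265 with AAC audio."""
    return [
        "ffmpeg", "-i", str(src),
        "-c:v", "libx265", "-crf", str(crf),
        "-c:a", "aac", "-b:a", "128k",
        str(dst),
    ]

# Run it (requires ffmpeg on PATH):
#   import subprocess
#   subprocess.run(compress_video("in.mp4", "out.mp4"), check=True)
```

Batch-applying this over a Dropbox tree is where the "hundreds of GBs" saving comes from.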
Newsletter summaries
As explained in a previous blog post (https://morgan.zoemp.be/indieblog/), I keep track of all new blog posts listed on https://indieblog.page/. I first cooked up a daily RSS feed for it, using LLMs. This proved useful, but as the list of RSS feeds grew, I realized I needed to filter it better, or to limit the output. I don't want the filtering to be random; I want it based on my taste. So I created another RSS feed, which is an LLM-aided summary of the first one.
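The output side of that pipeline is just generating a small RSS feed from the LLM's picks. A stdlib-only sketch (the function name and item tuple shape are illustrative, not my actual code):

```python
from xml.etree import ElementTree as ET

def summary_feed(title, items):
    """Build a minimal RSS 2.0 feed from (title, link, summary) tuples."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = title
    for item_title, link, summary in items:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = item_title
        ET.SubElement(item, "link").text = link
        ET.SubElement(item, "description").text = summary  # LLM-written blurb
    return ET.tostring(rss, encoding="unicode")
```

The input side, fetching and parsing the source feeds, is a separate concern (e.g. with a feed parser library); keeping the two decoupled makes the LLM step easy to swap out.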
Correcting or translating text
I asked an LLM to review this blog post. The instructions were simple: don't mess with my style, just flag the important fixes. I also asked for a permalink suggestion: one single English word, ideally fun or Gilfoyle-approved. The result is what you're reading now. This very paragraph was, in fact, fully translated from a French original.
What else?
I've shown a few cases where LLMs save me time or help improve my productivity. There are many more I haven't covered, where I use LLMs to generate reusable scripts. Once a working solution is cooked, the LLM is out of the picture, most of the time. When a script needs to depend on AI to work, I mostly use Mistral: cheaper than OpenAI, and I don't have to sacrifice my soul (or card) to use it. Automatic replies and content summaries are cases where interactive AI calls make sense. Still, I prefer to limit such usage; it makes my productivity overly dependent on fragile APIs and unpredictable outputs.