Hands-Off LinkedIn and Email

With Mistral AI's fixed price for unlimited API calls on the Experiment plan, it's just too easy to build your own automations: userscripts and custom tools that complement your favorite websites.

I've built such tools to save time on some of my most time-consuming tasks: replying to LinkedIn requests and emails. This post is just to keep a record of how I do it in 2025, and hopefully to inspire anyone else willing to save time.

It's also to be transparent with recruiters and contacts who might wonder whether I'm using AI for anything beyond code.

Examples:

I've built userscripts for the Fastmail Web UI to inject generated answers; see the screenshot below.

A screenshot of the auto-reply button in Fastmail Web UI
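
To give an idea of the shape of such a script, here is a minimal sketch. The Fastmail selector and the draftReply() helper are placeholders (the real script calls an LLM backend), so treat it as a starting point rather than my actual code.

```javascript
// ==UserScript==
// @name         Fastmail auto-reply button (sketch)
// @match        https://app.fastmail.com/*
// @grant        none
// ==/UserScript==
(function () {
  "use strict";

  // Placeholder: in the real script this calls an LLM backend with the thread text.
  async function draftReply(threadText) {
    return `Thanks for your email!\n\n(Generated draft based on: ${threadText.slice(0, 80)}...)`;
  }

  // Watch the page for a compose box and add a button next to it.
  const observer = new MutationObserver(() => {
    const composeBox = document.querySelector('div[contenteditable="true"]'); // selector is a guess
    if (!composeBox || document.querySelector("#auto-reply-btn")) return;

    const btn = document.createElement("button");
    btn.id = "auto-reply-btn";
    btn.textContent = "Auto-reply";
    btn.addEventListener("click", async () => {
      const thread = document.body.innerText; // crude: the real script scopes this to the conversation
      composeBox.textContent = await draftReply(thread);
    });
    composeBox.parentElement.prepend(btn);
  });
  observer.observe(document.body, { childList: true, subtree: true });
})();
```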

I've also built userscripts for the LinkedIn Web UI to:

  1. Suggest replies for the current conversation, without sending them automatically.
  2. Display calendar events and email conversations related to the contact/job I'm viewing.
A screenshot of the auto-reply / Emails buttons in LinkedIn Web UI

I automate other stuff too - monitoring, RSS, accounting - but that's not today's post.

Boring digital things should be hands-off. Fun digital things should be hands-on. This also touches on some of the ideas from The 4-Hour Workweek.

๐Ÿ’I consult on custom automation. My email is below.


💌 The best way to get in touch is via my email morgan at zoemp dot be. You can also follow me on the Fediverse / Mastodon at @sansguidon@mamot.fr. I speak (a lot) French, English and a bit of Dutch.

Scrobbling RTBF live radios to Last.fm and ListenBrainz with userscripts

Finding a way to scrobble live RTBF radios like Classic 21 and Classic 21 Metal to Last.fm and ListenBrainz was not as straightforward as it should have been.

I stopped using Spotify a long time ago and switched to Navidrome and Jellyfin/Finamp on Android, which, luckily for me, ship with built-in scrobblers for Last.fm and ListenBrainz.

However, in the universe of music scrobblers, I had yet to find a way to scrobble my favorite local radio stations, RTBF Classic 21 and Classic 21 Metal (https://www.rtbf.be/radio/liveradio/classic21_metal) 🎸🇧🇪.

They embed a player that should be compatible with the awesome web-scrobbler extension (https://github.com/web-scrobbler), whose whole purpose is to work with live radio web players. Sadly, I couldn't make it work with those stations, even though the history shows it apparently worked years ago. There is an integration for RTBF live radios at https://github.com/web-scrobbler/web-scrobbler/pull/2377/files which was supposed to cover Classic 21 as well, but the code seems obsolete now and I couldn't ship a working fix. So I paused a bit and gave up... though not for long 😂.

I had the idea of running Shazam in the background and sending its findings to some backend that would scrobble the songs, which sounded both epic and complicated. I tried the Shazam web extension anyway and noticed that, most of the time, it mismatched the song and artist displayed on the radio website. That's concerning. Why? I don't know yet, so just in case I've contacted the RTBF website admins about it.

Anyway, diving back into the code and opening Dev Tools in Brave helped me discover an interesting URL: https://core-search.radioplayer.cloud/056/qp/v4/events/?rpId=29, which returns the songs played on this station - the recent ones, the next one, and the one currently playing!
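
To give an idea of the approach, here is a minimal sketch of polling that endpoint. The field names used below (events, artistName, name, nowPlaying) are assumptions on my part - inspect the real JSON in Dev Tools and adapt.

```javascript
// Minimal sketch: poll the Radioplayer events endpoint for the current track.
const EVENTS_URL = "https://core-search.radioplayer.cloud/056/qp/v4/events/?rpId=29";

async function fetchNowPlaying() {
  const res = await fetch(EVENTS_URL);
  if (!res.ok) throw new Error(`Radioplayer request failed: ${res.status}`);
  const data = await res.json();
  // Assumed shape: data.events = [{ artistName, name, nowPlaying }, ...]
  const current = (data.events || []).find((e) => e.nowPlaying);
  return current ? { artist: current.artistName, track: current.name } : null;
}

// Poll every 30 seconds and log track changes.
let lastTrack = null;
setInterval(async () => {
  try {
    const np = await fetchNowPlaying();
    if (np && np.track !== lastTrack) {
      lastTrack = np.track;
      console.log(`Now playing: ${np.artist} - ${np.track}`);
      // -> hand this off to the scrobbling code (see below)
    }
  } catch (err) {
    console.error(err);
  }
}, 30_000);
```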

Then I decided to try hacking my own solution to this problem. It wouldn't go through Navidrome or a Chrome extension; I wanted a simple, minimalist hack. I'm used to tweaking websites' UX with Tampermonkey, so I gave it a try. The result is https://gitea.zoemp.be/sansguidon/snippets/raw/branch/main/tampermonkey/rtbf_scrobbler.js.

I've adopted ListenBrainz as an alternative to Last.fm, but I still can't fully let go of the latter. That's why my userscript supports both, and it's hackable for those who want to extend it 😉
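
For reference, the ListenBrainz side boils down to one authenticated POST. Here's a stripped-down sketch of that part (you need a user token from your ListenBrainz profile; from a Tampermonkey script you'd typically route the call through GM_xmlhttpRequest with a matching @connect rule):

```javascript
// Minimal sketch: submit a single listen to ListenBrainz.
// LB_TOKEN is your personal user token from your ListenBrainz profile.
const LB_TOKEN = "YOUR_LISTENBRAINZ_TOKEN";

async function scrobbleToListenBrainz(artist, track) {
  const res = await fetch("https://api.listenbrainz.org/1/submit-listens", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Token ${LB_TOKEN}`,
    },
    body: JSON.stringify({
      listen_type: "single",
      payload: [
        {
          listened_at: Math.floor(Date.now() / 1000), // Unix timestamp in seconds
          track_metadata: { artist_name: artist, track_name: track },
        },
      ],
    }),
  });
  if (!res.ok) throw new Error(`ListenBrainz rejected the listen: ${res.status}`);
}

// Example: scrobbleToListenBrainz("Metallica", "Master of Puppets");
```

The Last.fm side needs signed API requests, so it's a bit more involved; that part lives in the full userscript linked above.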

I've tested it on Classic 21 and Classic 21 Metal, and it should likely be compatible with most radios using the same Radioplayer backend.

My scrobbling profiles are https://www.last.fm/user/SansGUidon and https://listenbrainz.org/user/SansGuidon/.

Feel free to share, copy, reuse and provide feedback! I'll keep this post updated if RTBF answers my questions 😉 or if I get interesting feedback. I've also mentioned this post in https://github.com/web-scrobbler/web-scrobbler/discussions/5327.



LLMs – Chat Interfaces vs. Raw APIs: Why I Choose Conversations

I recently read Max Woolf's post on LLM use, where he explains why he rarely uses generative LLMs directly, preferring raw APIs for control. It's an interesting take, but I fundamentally disagree. For me, chat interfaces aren't just convenient – they're an essential part of understanding.

LLMs are more than code generators. They are interactive partners. When I use ChatGPT, Mistral, or Copilot in chat mode, it's not just about fast results. It's about exploring ideas, challenging my thinking, and refining concepts. The back-and-forth, the debugging, the reflection – it's like pair programming with a tireless assistant. If I need to test an idea or explore a concept, the chat interface is perfect for that: it's always available, from any device, no API or IDE needed.

Max argues APIs allow for more fine-tuning – system prompts, temperature control, constraints. Sure. But in a chat session, you can iterate, switch topics, revisit past decisions, and even post-mortem the entire conversation, as a way to learn from it and log your decisions. And yes, I archive everything. I link these sessions to tickets in TickTick to revisit ideas. Try doing that with an API call.

The chat interface is a workspace, not a magic wand. It's where you can think, break things, fix them, and learn. Isolating interactions to API calls removes that context, those learning moments. To me, that's missing the point.

APIs are fine for deterministic output. But I prefer the chaos of conversation – it forces me to engage deeper, explore failures, and actually think. That's why I don't just use LLMs to generate. I use them to reason. Not just for hard skills, but soft skills too.

Mentioned in https://lukealexdavis.co.uk/posts/apis-vs-chatbots/


Zombiemerge

I do love code reviews but I'm convinced they're best done live – reviewed, merged, communicated immediately.

A few weeks ago I submitted changes through merge requests, and a few weeks later I had completely forgotten about their implementation.

I've context-switched a few times since then...

Today the change was merged by the repository maintainers, and a few colleagues then started discussing one of its consequences. Only because I was in the same workspace did I react in time.

My mistake was probably not communicating more proactively about the change; since I'm neither the repository maintainer nor the release maintainer, I had no idea when it would be merged.

Anyway, several such merge requests are still queued. All recipes for future headaches.


LLMified

Saving time and storage (with style)

LinkedIn

I've been replying to all recruiters for a long time. Over time this became a chore, so I developed a userscript that loads on every LinkedIn conversation and calls Mistral AI to generate a reply in my preferred style. For every conversation I open, I hit the button and it drafts an adequate reply, with respect to the history of the conversation, my priorities of the moment, the language of the conversation, the tone, etc.

It is then up to me to post the proposed answer as-is, edit it, or discard it.
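
The heart of that userscript is a single call to Mistral's chat completions endpoint. Here's a minimal sketch of that part; the model choice and the system prompt below are placeholders, not my exact settings.

```javascript
// Minimal sketch: ask Mistral for a reply draft given the conversation so far.
// The system prompt and model are placeholders - tune them to your own style and priorities.
const MISTRAL_KEY = "YOUR_MISTRAL_API_KEY";

async function draftLinkedInReply(conversationText) {
  const res = await fetch("https://api.mistral.ai/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${MISTRAL_KEY}`,
    },
    body: JSON.stringify({
      model: "mistral-small-latest",
      messages: [
        {
          role: "system",
          content:
            "You draft short, polite LinkedIn replies in my voice. " +
            "Match the language and tone of the conversation. " +
            "Current priorities: placeholder - describe your own availability and interests here.",
        },
        { role: "user", content: conversationText },
      ],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```

The rest is plumbing: scrape the visible messages from the conversation pane, call this function from a button handler, and drop the result into the message box for me to review.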

Dropbox storage optimization

As a parent, I've developed the habit of archiving digital souvenirs of our kid's life. Those pics and videos accumulate. Being very organized, I like to avoid duplicates and to save the correct metadata (EXIF) in our pics and videos, which proves challenging with older pics from WhatsApp groups.

I also wanted to ensure that every time pictures and videos are shared in our WhatsApp family groups, I collect them in our Dropbox. This is done via Syncthing-Fork (the Android client) and Syncthing servers running on my Cloudron and my MacBook Pro. Syncthing monitors all folders that can contain videos or pictures. From there, my pipeline:

  • Moves all new pics/videos from Android-monitored folders to Dropbox. Since those folders are kept in sync via Syncthing, moving a pic/video out of a monitored folder and into Dropbox removes it from every location Syncthing watches on my Android, which organizes things on Dropbox while also making room on the device. Syncthing is set up manually, but it's easy to manage.
  • Detects pics/videos that do not contain a face, using YOLOv5 by Ultralytics. The script was generated via LLM.
  • Removes duplicates via some scripts (generated by LLM) or via https://github.com/arsenetar/dupeguru (manually, via its UI).
  • Compresses/converts pictures and videos using https://ffmpeg.org/, installed locally (see the sketch after this list). This saves hundreds of GBs, which reduces the bandwidth and resources needed to download, sync, and display those files. On top of this, I need less storage, so I can keep a cheaper Dropbox subscription for longer. LLMs generated most of the scripts needed for the task.
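
As an illustration of the compression step, here is a minimal sketch of the kind of script an LLM produces for this: it walks a folder and re-encodes videos with ffmpeg. The codec and CRF values are illustrative defaults, not my exact settings.

```javascript
// Minimal sketch (Node 18+): re-encode every .mp4 in a folder with ffmpeg to save space.
// Assumes ffmpeg is installed and on PATH; codec/CRF values are illustrative defaults.
const { execFileSync } = require("node:child_process");
const fs = require("node:fs");
const path = require("node:path");

const INPUT_DIR = process.argv[2] || "./videos";

for (const file of fs.readdirSync(INPUT_DIR)) {
  if (!file.toLowerCase().endsWith(".mp4")) continue;
  const input = path.join(INPUT_DIR, file);
  const output = path.join(INPUT_DIR, file.replace(/\.mp4$/i, ".x265.mp4"));
  if (fs.existsSync(output)) continue; // already processed

  console.log(`Re-encoding ${file}...`);
  execFileSync("ffmpeg", [
    "-i", input,
    "-c:v", "libx265", "-crf", "28",   // smaller files, visually close to the original
    "-c:a", "aac", "-b:a", "128k",
    "-movflags", "+faststart",         // nicer for streaming/preview
    output,
  ], { stdio: "inherit" });
}
```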

Newsletters summaries

As explained in a previous blog post (https://morgan.zoemp.be/indieblog/), I keep track of all new blog posts listed on https://indieblog.page/. I first cooked up a daily RSS feed for it, using LLMs. This proved useful, but as the list of RSS feeds grew over time, I realized I needed to filter the list better, or at least to limit the output. I don't want the filter to be random; I want it to be based on my taste. So I created another RSS feed, which is an LLM-aided summary of the first one.
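
Conceptually it is a small script in the spirit of the sketch below: pull the items from the first feed, ask Mistral to keep and summarize what matches my interests, and feed the result into the second RSS feed. The feed URL, the prompt, and the listed interests are placeholders, not my exact setup.

```javascript
// Minimal sketch (Node 18+): summarize the daily feed with Mistral before republishing it.
const FEED_URL = "https://example.org/indieblog-daily.xml"; // placeholder: the first (unfiltered) daily feed
const MISTRAL_KEY = process.env.MISTRAL_API_KEY;

async function summarizeFeed() {
  const xml = await (await fetch(FEED_URL)).text();
  // Crude title/link extraction; a real script would use a proper RSS parser.
  const items = [...xml.matchAll(/<item>[\s\S]*?<title>([\s\S]*?)<\/title>[\s\S]*?<link>([\s\S]*?)<\/link>/g)]
    .map(([, title, link]) => `- ${title.trim()} (${link.trim()})`);

  const res = await fetch("https://api.mistral.ai/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json", Authorization: `Bearer ${MISTRAL_KEY}` },
    body: JSON.stringify({
      model: "mistral-small-latest",
      messages: [
        {
          role: "system",
          content:
            "From this list of blog posts, keep only the ones likely to interest me " +
            "(placeholder interests: self-hosting, automation, music) and summarize each in one line with its link.",
        },
        { role: "user", content: items.join("\n") },
      ],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content; // this text becomes the body of the second feed
}

summarizeFeed().then(console.log);
```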

Correcting or translating text

I asked an LLM to review this blog post. The instructions were simple: don't mess with my style, just flag the important fixes. I also asked for a permalink suggestion – one single English word, ideally fun or Gilfoyle-approved. The result is what you're reading now. This very paragraph was, in fact, fully translated from a French original 🙂.

What else?

I've shown a few cases where LLMs save me time or help improve my productivity. There are many more I haven't covered, where I use LLMs to generate reusable scripts. Once a working solution is cooked, the LLM is out of the picture – most of the time. When a script needs to depend on AI to work, I mostly use Mistral: cheaper than OpenAI, and I don't have to sacrifice my soul (or card) to use it. Automatic replies and content summaries are cases where interactive AI calls make sense. Still, I prefer to limit such usage – it makes my productivity overly dependent on fragile APIs and unpredictable outputs.

