Magic is problematic

I deal with computers, so I want things to work in the most boring and reliable way possible: with automation, procedures, and scripts, not through magic.

That's why, while I love tools such as Atuin, I have a problem with their slogan "Making your shell magical", and more generally with any product using that kind of selling point, especially AI/LLM-based products.

For this reason, I'm usually against any kind of black-box, one-size-fits-all tool or platform that wants to ease our lives by hiding the complexity. I think the only result we get out of those abstractions is complexity, pain, and a culture of incompetence and dependency. I mean, if you want to deal with technology, you should at least understand it.

In the end, it's not all magic [1] [2], but it can certainly feel like magic once we lack understanding. Magic feels shiny and appealing after all; its antonyms say it all.

Let's encourage everyday users to look inside the box and understand what is happening, even if not everyone is going to switch to Linux, and even if debugging is not easy.

At work, there is an onboarding procedure aimed at new developers joining the team, and it relies on scripts that have been left untouched for way too long. The bad part: the procedure is broken, but nobody dares to fix it; instead, the old-timers on the team share dirty hacks and workarounds with the newcomers.

Once you face such a problem, the only solution is to address the root causes, not the symptoms. I chose to take a look at the procedure, run it again and again after each attempted improvement, cut it into pieces, and shred or rewrite whatever seemed unreliable or suspicious.
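Concretely, here is a minimal sketch of the shape I aim for; the step names and commands are hypothetical, not the actual scripts from work:

    #!/usr/bin/env bash
    # onboarding.sh - hypothetical rewrite of a crusty onboarding procedure.
    # Fail fast so every broken step surfaces immediately instead of being worked around.
    set -euo pipefail

    install_tooling() {
      echo "[1/3] installing tooling"
      # package installs would go here
    }

    configure_git() {
      echo "[2/3] configuring git"
      git config --global pull.rebase true
    }

    clone_repositories() {
      echo "[3/3] cloning repositories"
      # git clone ... (one repo per line, easy to disable while testing)
    }

    # One function per step: rerun everything after each change,
    # or run a single step (./onboarding.sh configure_git) to isolate what is broken.
    steps=(install_tooling configure_git clone_repositories)
    if [ "$#" -gt 0 ]; then steps=("$@"); fi
    for step in "${steps[@]}"; do
      "$step"
    done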

Magic exists, but I’ve never seen any in software. Problems are logical. Nothing is impossible. You can solve this problem.1

As a software engineer, please don't fix symptoms. Don't get too used to dealing with crap and unsolved problems. Don't be lazy, don't accept the status quo; do the hard work of understanding and solving the problems. Focus on understanding things more deeply. Grow your own knowledge and everyone else's. Be a firefighter against ignorance, and help educate your peers to better understand why things work or don't.

Thank you.

HN Comments.

  1. https://catskull.net/thoughts-on-debugging.html ↩︎

Prompt hacks, aka getting the best code out of GPT

I've been using a paid ChatGPT Plus subscription since May 2023 and I'm still happy with it. It's one of those tools you have to master to get the most out of the time you spend with it. There are a lot of benefits to using it; of course, there are also downsides and bugs. Some of them can be mitigated, and that's the purpose of this post.

Disclaimer / Context of use 🧾

  • I use ChatGPT mostly for scripts, and each conversation is focused on a single topic, i.e. single-file codebases if possible.
  • I don't use OpenAI API keys and I'm not interested in using the OpenAI API directly. I was on an OpenAI pay-as-you-go plan and was charged way too much for my needs. I advise anyone to use ChatGPT Plus, which to this day remains worth it and good enough for both personal and professional use.
  • This article was not written nor reviewed by AI/ChatGPT.

Lessons learned - Do ✅

  • Test, test, test after each change. ChatGPT will make you lose faith by repeating the same mistakes, removing sensitive code, or providing incomplete solutions; always verify.
  • Before running or committing anything generated by ChatGPT, always compare it with what you had before. Git diff the result of the changes (see the sketch after this list). If you have any doubt about a specific change, submit the git diff to ChatGPT for review and ask for explanations, while also explaining why you have doubts (failing tests, errors at runtime, weird code removal, ...).
  • Tell ChatGPT to always decouple code into functions.
  • Instruct it to write minimalist code, avoid comments in code, respect your specific style (if you provide examples), use the fewest lines possible, and avoid long functions.
  • Keep one conversation per context / type of problem / flow. Once done with the problem, validate the solution (test!), commit locally, and delete the conversation to keep your Chat history clean.
  • Try to limit ChatGPT's attention to one single file or one single function at a time, to improve its speed and accuracy. I've experienced inconsistencies and code regressions when expecting ChatGPT to handle too many files and too many changes at once.
  • If you have a lot of work you expect ChatGPT to perform for you, be patient and request one change at a time, review the outcome, and only move on to the next change once it's validated. ChatGPT sucks at multitasking. Keep the rest of your TODO list somewhere else to track what remains to be requested from ChatGPT.
  • You will get better results if your tech stack relies on popular libraries and programming languages, so keep that in mind and be prepared to deal with more mistakes if you pick niche languages.
  • Ask it to only output the code that requires change, e.g. add this at the end of your prompt:
    just output the function(s) needing change
  • If you really want to multitask, open multiple ChatGPT sessions (browser tabs) in parallel, each focused on a distinct change, so you can multitask while ChatGPT stays focused on single tasks 🙂
  • One change at a time. If ChatGPT keeps making errors, step back, restart from the latest stable version of your code, and ask for minimal changes, step by step, while you validate each increment.
  • When you're happy with your changes, ask ChatGPT to improve the code's performance or clean it up.
  • When debugging your code with ChatGPT, provide stack traces, inputs, and outputs, and even complete your prompt with a copy of the code that seems buggy (based on the stack trace).
  • Be suspicious if ChatGPT changes your code way too much or introduces weird changes to dependencies. Review, check, investigate.
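As mentioned in the git diff tip above, I never trust generated code blindly. A minimal sketch of that review loop, with a hypothetical file name:

    # Paste ChatGPT's version into the working tree, then review what actually changed
    git diff -- script.py

    # Happy with the diff and the tests? Commit a small, validated increment
    git add script.py
    git commit -m "refactor parsing into a dedicated function (reviewed ChatGPT output)"

    # Not happy? Drop the generated change and go back to the last committed version
    git restore script.py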

Lessons learned - Avoid ❌

  • Don't trust ChatGPT's output. ChatGPT is known for hallucinating and also tends to over-complicate solutions to simple problems.
  • Don't run or commit anything generated by ChatGPT that you don't understand, and always compare ChatGPT's solution with what you had before.
  • Don't ask for too many improvements or bug fixes at once, or be prepared to deal with many new errors and regressions.
  • Don't keep conversations going for too long: ChatGPT will likely forget the initial context and you will suffer; it also causes a lot of scrolling and bloats the web page, which becomes slower to load and will likely crash your browser tab at some point.
  • Don't switch context / files / problems in the same conversation. It's a waste of time, and you will suffer later when trying to find specific content or make sense of anything.
  • Don't share secrets/passwords with ChatGPT.
  • Don't waste your time with ChatGPT if it seems unable to solve your problem. Either your problem is too big or your input is crap; either way, you will likely move faster by starting a fresh conversation with a smaller problem, or by tackling a specific part of the problem on your own. Be confident in your own abilities. Don't let ChatGPT turn you into an idiot.
  • Don't expect ChatGPT to be as efficient on big codebases and complex problems as it is on small scripts and simple problems. So use it more often for the latter and keep the fun of solving the big problems yourself.
  • Don't expect ChatGPT to understand a single thing you do, nor why it generates its code. It's a dumb machine with no creativity built in. It has to be treated as such, and with caution.
  • Don't expect ChatGPT to work well or fast on huge, bloated scripts. That should encourage you to decouple your code into functions, modules, and specialized files/components/...

Relevant references

  • I consider that ChatGPT eases automation to the point that these XKCD comics are less relevant than they used to be.
Automation (by XKCD)

Legend: "I spend a lot of time on this task. I should write a program automating it!"

via https://xkcd.com/1319/

Is It Worth the Time? (by XKCD)

Legend: "How long can you work on making a routine task more efficient before you're spending more time than you save? (across five years)"

via https://xkcd.com/1205/

Extras

  • If the ChatGPT tab loses focus in your browser, the generation process will likely be interrupted.
  • ChatGPT seems stuck generating the output? Refreshing the page might be enough; in other cases, ChatGPT might automatically continue the generation or show a button you can hit to force it.
  • I've commented on this topic and this post at https://lobste.rs/s/7a3qhh/how_s_your_experience_so_far_using_llms_for#c_igssaj

Thank you 🙂


Effective content curation

I came up with this workflow, which seems to pass the test of time, and I'm sharing it with you:

  • I'm using the ChatGPT » Summarize & Chat extension for Brave/Chromium browsers to save time scrolling through long articles in my inbox, but it's important to note that it shrinks content more than it summarizes it.
  • I don't subscribe to any newsletters; everything is read through Miniflux (an RSS reader. If you don't know about that, read -> what is RSS?), combined with RSS-Bridge and ChangeDetection and some tips, all self-hosted. For any newsletter that cannot be replaced with an RSS feed, I rely on my hero https://kill-the-newsletter.com/. And if you prefer email over RSS, I recommend Blogtrottr to follow RSS/Atom feeds by email in real time.
  • I edit my feed titles with emojis expressing how interesting I find them: 😕 (boring?), 😃 (joyful read), etc.
  • I categorize feeds, label them with emojis as well, and sort them from best to worst. Those visual cues really help. When I see nothing interesting for a while in a feed I've subscribed to, I remove it from Miniflux.
  • I'm customizing my RSS curation in Miniflux through some hacks.
  • I manage to keep a maximum of 20-30 RSS feeds of interest. Among those, I find maybe a dozen absolutely fantastic, and I even share them in my /links section.
  • I've configured the Tampermonkey browser extension to take control of the rendering of my RSS feed list and replace the whole page with "FOCUS", at least 80% of the time.
  • Anything too long to read but that looks interesting is immediately dropped from Miniflux and saved into Wallabag (bookmark management) for a later read, or shared in Shaarli. The goal is to declutter my subscriptions inbox.
  • Articles I browse randomly that are already saved in Wallabag get tags like "x2" if it's the second time I save them, "x3" if it's the third, etc.
  • If I find an image or PDF of interest, I save it locally to the relevant Dropbox folder. I have a folder for books, which is subject to automatic triage with some scripting, and my ePubs are automatically converted to PDF (see the sketch after this list). Duplicate files are moved to subfolders like "x2" if it's the second time I save the same book, "x3" if it's the third time, etc.
  • Everything in Wallabag is automatically labelled based on keywords. I don't waste any time adding manual labels. If a label is missing, it means an automation is missing in my Wallabag auto-tagging rules.
  • I keep a maximum of 5 (down from 10) podcast subscriptions on Spotify. In the end, I dream of subscribing to only a dozen RSS feeds, but that's probably very optimistic.
  • I want to take time for whatever is worth reading, and skip the rest fast.
  • Whatever is not worth that time gets shrunk (summarized).
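For the record, the book triage mentioned above is nothing fancy. A rough sketch, assuming Calibre's ebook-convert is installed and a ~/Dropbox/Books layout similar to mine; paths and rules are illustrative:

    #!/usr/bin/env bash
    # Hypothetical triage of a Dropbox "Books" folder:
    # convert each ePub to PDF, then park the original ePub in a subfolder.
    set -euo pipefail
    books="$HOME/Dropbox/Books"

    mkdir -p "$books/epub-originals"
    for epub in "$books"/*.epub; do
      [ -e "$epub" ] || continue              # no ePub files, nothing to do
      pdf="${epub%.epub}.pdf"
      if [ ! -e "$pdf" ]; then
        ebook-convert "$epub" "$pdf"          # Calibre's CLI converter
      fi
      mv "$epub" "$books/epub-originals/"
    done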

Nerds Against Clutter: My Digital Downsizing Diary

Decluttering and letting go.

  • Dropping Discord, Diaspora, Daily.dev, maybe Pixelfed and Mastodon next (done, by March 2024). Too buggy, too noisy.
  • Kissed Google Keep goodbye and embraced Obsidian even more, thanks to the Importer plugin.
  • Trying to escape the WhatsApp surveillance state. I'm axing useless groups left and right.
  • Scrubbing my old web presence. It's like digital housekeeping.
  • Using Syncthing now. Real-time sync across devices without cloud middlemen. Dropbox, you're on notice.
  • Deploying FDUPES for disk decluttering – it's a duplicate file slaughterhouse. Throwing inotifywait into the mix for smart folder monitoring, because who likes manual mess management? (See the sketch after this list.)
  • Cut down on RSS. Using Wallabag, Miniflux, and Shaarli more. Bookmarking tools still somehow suck and I can't see a better alternative (yet) for my needs... yuck. Looking at the market for knowledge and bookmark management tools, there is room for improvement in how we manage and consume information. For years, most of the hard work has been left to you, the user of the tools that connect us to information.
  • Harnessing RSS-Bridge and Miniflux for streamlined info feeds. Using ChangeDetection for the unRSS-able stuff, i.e. to monitor some indexes, lists, legal terms, and release pages.
  • My tab hoarding was legendary: my colleagues and partner knew me for keeping way too many tabs open, and my nerves cracked reading about other folks suffering from the same issue. So I decided to close a bunch of them and cap each window at 18 tabs with Tab Limiter. Browser zen achieved.
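A minimal sketch of that FDUPES + inotifywait combo, using a Downloads folder as the hypothetical mess to watch; flags and paths are illustrative, and -d/-N delete files, so test on a copy first:

    # One-off cleanup: recursively find duplicates and delete them without prompting,
    # keeping the first copy of each set (-r recurse, -d delete, -N no prompt).
    fdupes -rdN "$HOME/Downloads"

    # Ongoing monitoring: rerun the cleanup whenever a new file lands in the folder.
    inotifywait -m -e create -e moved_to --format '%w%f' "$HOME/Downloads" |
    while read -r newfile; do
      echo "new file: $newfile"
      fdupes -rdN "$HOME/Downloads"
    done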

Exploring and creating.

  • Blogging's up, but it's a discipline game. Need to turn Obsidian hoarding into public wisdom. Notebooks over phones, knowledge over scrolling.
  • Taming my Brave extension zoo with Context. It's like a digital bouncer for my browser. Funny, now I have more UX/privacy-oriented extensions than tabs.
  • Eyeing Geeqie to outsmart duplicate photos. Even my pixels need to be minimal.
  • n8n (a Zapier/IFTTT alternative) is my new digital butler, still a bit rough around the edges. Coding my own automation magic because their recipes are just appetizers for my needs. For the record, I'm now using it to automate RSS feed triage and the web archiving of some bookmarks, as I feel archiving beats bookmarking.
  • Diving back into Rust. Cooking up something for productivity and knowledge management. Stay tuned.
  • And of course, some snowballs and video gaming to keep things balanced and fun.

To be continued.