Contabo is cheap for reasons

TLDR: In December 2023 I picked this low-cost VPS provider. I do not recommend them, and I hope this write-up will save people from wasting any time or money on Contabo.

They claimed "German Quality, Always". It's neither good quality nor always.

For the most part, from December 2023 to September 2024, I could almost live without caring too much about managing my server, whose setup is mostly delegated to Cloudron. I had no problem running my web apps with Cloudron on this cheap VPS, and my Cloudron backups were configured to use Contabo Object Storage. No problem on the surface. Until Murphy's law entered the game:

  • An outage in September 2024 broke availability in the Nuremberg data center, so both my backups and my Cloudron instance were unavailable. I entered panic mode because there was nothing I could see or do. Contabo offered no support because their customer portal was impacted as well, and no fail-over mechanism had been put in place. That says a lot. Fortunately this was resolved after a few days.
  • Shortly after the outage was resolved, I noticed that rclone and Cloudron backups were acting up, mostly due to slowness or rate limiting on the Contabo Object Storage API. I couldn't find an explanation, so I contacted Contabo support, who fixed my problem without saying much, sadly.
  • A few weeks later, still in September 2024, the DNS configuration on my VPS started to cause issues, which only came to light through problems in WordPress, Miniflux and Changedetection: everything blew up in my face in the form of timeouts or unreachable URLs. Fortunately I found this write-up https://www.thomasmartens.eu/contabo-dns-lookup-timeouts/ and could simply fix my default Netplan configuration, switching from Contabo's DNS to 1.1.1.1 (Cloudflare) and 9.9.9.9 (Quad9) (see the sketch after this list).
  • I also noticed that my object storage space was constantly filling up, causing me some anxiety, and I couldn't clean anything up. The backup cleanups were also frequently failing silently in Cloudron as a result. In the end I couldn't find the root cause, but switching to Hetzner storage instead of Contabo Object Storage fixed the issue and made the whole process faster.
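
For reference, here is a minimal sketch of the Netplan change, in the spirit of the write-up linked above. The file name and the interface name (eth0) are assumptions; check ls /etc/netplan/ and ip a on your own VPS before applying anything.

# Override the DHCP-provided DNS with Cloudflare and Quad9 (file/interface names are assumptions).
sudo tee /etc/netplan/99-custom-dns.yaml > /dev/null <<'EOF'
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: true
      dhcp4-overrides:
        use-dns: false              # ignore the DNS servers pushed by Contabo's DHCP
      nameservers:
        addresses: [1.1.1.1, 9.9.9.9]   # Cloudflare and Quad9
EOF
sudo netplan apply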

Unfortunately I went through those issues, but I could have avoided them. When picking a product or service, it's better to do more research and ask around for advice.

Ask ChatGPT to be brutally honest when giving you advice; it can save you money and pain.

Or just visit forums and communities like Reddit.

Lessons learned

  • Some products and services are cheap for reasons: quality for free is often a scam.
  • I should have checked the internet first. Be more picky: check community forums and reviews. Avoid unneeded pain.
  • Do not put all your eggs in the same basket, i.e. pick different locations and providers for your compute and your data.
  • Monitor the health of your backups and storage location (see the sketch after this list).
  • Plan for disaster.
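
To make the monitoring point concrete, here is a minimal sketch of the kind of cron-able check I have in mind, using rclone (and ntfy for the alert). The remote name, bucket, ntfy topic and the 36-hour threshold are all assumptions to adapt to your own setup.

#!/usr/bin/env bash
# Alert if the newest object in the backup bucket is older than 36 hours.
set -euo pipefail

REMOTE="hetzner-backups:cloudron-backups"   # hypothetical rclone remote:bucket

# rclone lsl prints: <size> <date> <time> <path>; sort by date+time and keep the newest entry.
latest=$(rclone lsl "$REMOTE" | sort -k2,3 | tail -n 1 | awk '{print $2" "substr($3,1,8)}')
age_hours=$(( ($(date +%s) - $(date -d "$latest" +%s)) / 3600 ))

if [ "$age_hours" -gt 36 ]; then
  # Push a notification through ntfy (hypothetical topic).
  curl -s -d "Newest Cloudron backup is ${age_hours}h old" https://ntfy.sh/my-backup-alerts
fi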

Specs comparison

If you ever need to battle-test your VPS, have a look at some of the VPS benchmarking scripts listed at https://monovm.com/blog/vps-benchmark-tools/.

I picked the first one on the list, namely bench.sh https://github.com/haydenjames/bench-scripts/blob/master/README.md#benchsh, to compare the specs of my Gitea runner on Hetzner with those of my main Cloudron instance at Contabo.
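
The one-liner is the script's documented usage (it also shows up in the reports below); piping through tee is just my own addition to keep a copy of each report so the two machines can be compared afterwards.

# Run the benchmark and keep the report for a later side-by-side comparison.
wget -qO- bench.sh | bash | tee "bench-$(hostname).txt"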

Here are the specs and price on paper:

Contabo VPS M SSD https://www.vpsbenchmarks.com/hosters/contabo/plans/m-ssd (deprecated plan)

  • CPU 6 cores
  • Memory 16 GB
  • Disk 600 GB (100% SSD)
  • Port 400 Mbit/s
  • Price €11.25 / month

Hetzner - CX32 https://www.vpsbenchmarks.com/hosters/hetzner/plans/cx32 (still available on Nov 13, 2024).

Both are in the same region. The CPU count and disk space differ, but overall there is a clear winner on performance.

-------------------- A Bench.sh Script By Teddysun -------------------
 Version            : v2024-11-11
 Usage              : wget -qO- bench.sh | bash
----------------------------------------------------------------------
 CPU Model          : AMD EPYC Processor (with IBPB)
 CPU Cores          : 6 @ 2799.996 MHz
 CPU Cache          : 512 KB
 AES-NI             : ✓ Enabled
 VM-x/AMD-V         : ✗ Disabled
 Total Disk         : 593.5 GB (53.7 GB Used)
 Total Mem          : 15.6 GB (4.9 GB Used)
 Total Swap         : 4.0 GB (268.2 MB Used)
 System uptime      : 6 days, 17 hour 31 min
 Load average       : 0.67, 0.85, 1.09
 OS                 : Ubuntu 22.04.3 LTS
 Arch               : x86_64 (64 Bit)
 Kernel             : 5.15.0-124-generic
 TCP CC             : cubic
 Virtualization     : Dedicated
 IPv4/IPv6          : ✓ Online / ✗ Offline
 Organization       : AS51167 Contabo GmbH
 Location           : Nürnberg / DE
 Region             : Bavaria
----------------------------------------------------------------------
 I/O Speed(1st run) : 131 MB/s
 I/O Speed(2nd run) : 98.0 MB/s
 I/O Speed(3rd run) : 94.4 MB/s
 I/O Speed(average) : 107.8 MB/s
----------------------------------------------------------------------
 Node Name        Upload Speed      Download Speed      Latency     
 Speedtest.net    399.41 Mbps       399.48 Mbps         11.37 ms    
 Los Angeles, US  378.05 Mbps       401.30 Mbps         155.48 ms   
 Dallas, US       390.21 Mbps       402.25 Mbps         127.98 ms   
 Montreal, CA     44.98 Mbps        400.75 Mbps         105.08 ms   
 Paris, FR        404.24 Mbps       397.38 Mbps         18.16 ms    
 Amsterdam, NL    392.33 Mbps       396.11 Mbps         23.08 ms    
 Beijing, CN      275.61 Mbps       410.68 Mbps         241.41 ms   
 Shanghai, CN     129.99 Mbps       362.16 Mbps         428.46 ms   
 Hong Kong, CN    224.22 Mbps       386.99 Mbps         284.37 ms   
 Singapore, SG    93.82 Mbps        362.53 Mbps         329.19 ms   
 Tokyo, JP        302.55 Mbps       393.36 Mbps         256.67 ms   
----------------------------------------------------------------------
 Finished in        : 5 min 43 sec
 Timestamp          : 2024-11-13 09:18:58 UTC
----------------------------------------------------------------------
-------------------- A Bench.sh Script By Teddysun -------------------
 Version            : v2024-11-11
 Usage              : wget -qO- bench.sh | bash
----------------------------------------------------------------------
 CPU Model          : Intel Xeon Processor (Skylake, IBRS, no TSX)
 CPU Cores          : 4 @ 2099.998 MHz
 CPU Cache          : 16384 KB
 AES-NI             : ✓ Enabled
 VM-x/AMD-V         : ✗ Disabled
 Total Disk         : 75.0 GB (3.5 GB Used)
 Total Mem          : 7.6 GB (632.2 MB Used)
 System uptime      : 39 days, 14 hour 22 min
 Load average       : 0.03, 0.02, 0.00
 OS                 : Ubuntu 24.04 LTS
 Arch               : x86_64 (64 Bit)
 Kernel             : 6.8.0-41-generic
 TCP CC             : cubic
 Virtualization     : Dedicated
 IPv4/IPv6          : ✓ Online / ✓ Online
 Organization       : AS24940 Hetzner Online GmbH
 Location           : Nürnberg / DE
 Region             : Bavaria
----------------------------------------------------------------------
 I/O Speed(1st run) : 846 MB/s
 I/O Speed(2nd run) : 795 MB/s
 I/O Speed(3rd run) : 929 MB/s
 I/O Speed(average) : 856.7 MB/s
----------------------------------------------------------------------
 Node Name        Upload Speed      Download Speed      Latency     
 Speedtest.net    2292.48 Mbps      2473.56 Mbps        0.66 ms     
 Los Angeles, US  605.99 Mbps       2986.01 Mbps        157.65 ms   
 Dallas, US       702.97 Mbps       3420.20 Mbps        128.18 ms   
 Montreal, CA     175.26 Mbps       937.60 Mbps         90.89 ms    
 Paris, FR        7119.88 Mbps      9681.24 Mbps        13.65 ms    
 Amsterdam, NL    4993.84 Mbps      7353.62 Mbps        10.41 ms    
 Beijing, CN      627.18 Mbps       2611.88 Mbps        187.03 ms   
 Shanghai, CN     1.34 Mbps         1791.62 Mbps        325.14 ms   
 Hong Kong, CN    352.36 Mbps       2526.89 Mbps        234.43 ms   
 Singapore, SG    181.44 Mbps       670.17 Mbps         194.45 ms   
 Tokyo, JP        313.71 Mbps       2686.38 Mbps        236.80 ms   
----------------------------------------------------------------------
 Finished in        : 5 min 48 sec
 Timestamp          : 2024-11-13 09:25:24 UTC
----------------------------------------------------------------------

Conclusion and what's next

I still have my main VPS at Contabo; however, my storage and backups are now with Hetzner. Migrating the VPS to Hetzner should be a no-brainer, thanks to the built-in quality of Cloudron.

I've also configured another VPS with Hetzner, with satisfaction, for running Gitea Actions, because there is no way I'm relying on GitHub.
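
For the record, setting up the runner itself boils down to roughly the two commands below, going by Gitea's act_runner documentation. The instance URL, registration token, runner name and label image are placeholders, not my actual values.

# Register the runner against your Gitea instance, then start the daemon.
./act_runner register --no-interactive \
  --instance https://gitea.example.com \
  --token <REGISTRATION_TOKEN> \
  --name hetzner-runner \
  --labels ubuntu-latest:docker://node:20-bookworm
./act_runner daemon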

So now you know. Move away from them, except maybe for cheap, non-business-critical workloads where performance and reliability are not a concern.

Typing fast is not so important

In response to https://www.rugu.dev/en/blog/on-typing-fast.

HN comments.

What to say about this? I like typing and feeling productive, but the two are unrelated.

TLDR: We need to slow down a bit, as programmers and as technologists.

I believe there are virtues to typing fast: for the impatient programmer, for the prolific coder, for the overly busy person, and because free time is never given to anyone. I tend to be quite impatient at times, because I see fast builds and fast code as solved problems, at least with boring tech.

I nurture other virtues, like removing code, dependencies, pipeline steps, YAML soup, hundred-line setup instructions and docs, frameworks and libraries. You can shift to simple scripts, binaries and code that are efficient and -- most importantly -- that you understand, can maintain and validate, and that fit on your screen without scrolling through long files or switching between tabs and editors to understand them.

There are benefits to pushing releases faster, but that has nothing to do with typing more code faster. Do not push buggy code in the first place. Understand the code you write and make it boring, readable and concise, because you understand it and know enough to improve it and make some complicated concepts vanish.

Because you focus on the user experience, on quality, on the business, not on the code, you will be able to explain to the business that it's more important to do the right thing than to rewrite things three times later and debug them four times. You know how to defend your case because you know the impact of a badly designed solution built too fast. You already prefer quality code, and you know enough about the business to explain how risky it is to push bugs faster and then spend all your valuable time debugging your code or your peers' code.

Do I enjoy more YAML and code on a daily basis? Certainly not. Duplicated code and YAML make me puke. I'm happy if a colleague trashes some code I wrote, because in the end we do not need to keep it all. And a good CI pipeline should likely be two lines long, not thousands of lines long (hello GitLab/GitHub and other YAML soup lobbies). More code is more fragile and harder to validate and rework. Good code takes a lot of rework, or maybe more thoughtful upfront design.

Also, do you really want to type the whole day? Is that your only way to solve problems? Some things are not that urgent, nor that important.

  • A single line of code added or removed can make a whole day better. A simple "NO" can avoid tons of bugs.
  • Reading code, learning about boring and legacy tech, will teach you valuable lessons.
  • Discussing decisions and designs with peers, even in the middle of other things, will lead you to connect more dots, and will lead to opportunities and more collaborative ways of solving problems.
  • Reinventing the wheel, simplifying solutions, solving problems, thinking, writing: none of these require typing faster.
  • Programming does not require typing faster.
  • Authoring good books and novels does not require typing fast.
  • The best work is not always the result of fast work. A good movie/TV show script, a good (comic) book, a good music release: all require thoughtful work.
  • Maybe feeling emotionally pressured to react/rant on the internet or social media leads you to type faster.
  • Maybe a bad manager wants you to answer faster to what appears to be urgent/important. But maybe you have to reconsider the urgency.
  • Maybe we do not need to reach 120% productivity, nor 600%, nor 100%. We are not machines.

We do not need more code/bugs in the world, nor more pressure to deliver more in less time.

We need more quality and care. We need to take the time to think and discuss code. And sometimes to consider doing nothing, maybe waiting for some technology to mature rather than succumbing to the hype... hello LLMs and their carbon footprint, ruining long efforts when a few good IFs could have done the job.

I tend to see myself as a slow programmer, by which I mean a thoughtful one. For a decade at least. And I believe this is rarely valued. Yet an important virtue, for me, is taking the time.

And to reconsider our ways of solving problems rather than trying to convince the world to speed up a little more. We already suffer from enough crap to fix: alerts, dark patterns, slow code, bugs, security incidents, broken tests, broken pipelines, fix(ci) commits everywhere, git conflicts, ignored specs, overlooked docs...

Do we want more code delivered by heroes (or glorified code addicts) through harder and faster work? Do we want our value determined by our typing speed or the number of hours spent at work?

Do I really want to pressure my peers and myself even more by always typing faster and more, energized by caffeine (which I love too much)? No thanks.

I'm not a Luddite, nor am I for the status quo, and yet... less is a viable option.

  • Maybe the problem is not so urgent.
  • Maybe this feature will make everyone even more busy with the bugs and side effects and consultancy.
  • Maybe this extra feature or dependency will introduce too much risk and maintenance.
  • Maybe teaching you how to reach the same goal another way is just as efficient.
  • Maybe this work can be avoided with a phone call, a chit-chat or a boring old idea or tool.
  • Maybe you do not need to be a code hero, especially as code is expensive.
  • Maybe you can solve the problem without a computer.
  • Maybe you need something simpler.
  • Maybe your problem was already solved many times before. Do not repeat the cycle or history.
  • Maybe we deserve a more disconnected world, long live local and offline first.
  • Maybe I want a world where I'm not urged to switch apps and copy-paste the MFA code within a 30-second time frame.
  • Maybe I can cultivate patience by watching someone type slowly, and it's fine.

Or if you insist, let's do it the right way, and think of a minimalist, solid and boring way to do it.

Related: Silicon Valley S01E05 - Typing speed

Markdown: a text adventure

Seriously, I wonder what is wrong with us, computer scientists and computer hobbyists. I thought I loved Markdown, that I needed to keep telling the world about it, but what renders in Gitea does not render the same in GitHub, nor in Obsidian. I'm likely an idiot; let's find out.

The guide at https://www.markdownguide.org/book/ already starts by confusing readers looking for simplicity, prompting them to pick between basic and extended syntax. The top menu also mentions hacks, tools and books. Woot? Aren't we talking about a simple text format and about making things simpler?

This can't be so complicated, or at least so I thought. Then I visited https://en.wikipedia.org/wiki/Markdown#Implementations and fell off my chair -- note the >dramatic< tone here, but I'm only sitting on my sofa avoiding sleep, I'm all fine.

I hope we are solving this problem. Surely the smart people have figured it out... < o> < o> https://hn.algolia.com/?dateRange=all&page=0&prefix=true&query=variant+markdown&sort=byDate&type=comment. Woot.

Damn, even on Markdown, supposedly a simpler and better source format and publishing tool than HTML, we experts can't agree. A myriad of tools and implementations, each extended by a few more artisanal tools here and there; millions of hours wasted?
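
To make the disagreement concrete, here is a tiny snippet of my own (not taken from any spec or from the tools above) whose rendering depends on the implementation and its settings:

  A first line
  A second line, separated from the first by a single newline

  * a list item
    * another item indented two spaces further

Depending on the renderer (and on options such as Obsidian's strict line breaks setting), the single newline is shown either as a space or as a real line break, and the two-space indentation may or may not be enough to nest the second item.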

There are plenty of static site generators, and people keep crafting them and enriching Markdown with details to hopefully generate, well, mostly valid HTML? Or not? Last time I checked, only a handful of them took care of that goal.

We could choose to go back to working with text or HTML without tools in our way. This blog post, for instance, is almost just text and links; there is not much I require from Markdown. That likely makes it portable. No transpiling needed, no tools.

It's NOT SO HARD and still readable. This blog post also has links that do not require remembering keyboard shortcuts on a Mac for [Title](...) or combining any special keys.

Markdown is like sharing a recipe, but everyone reinvents a different complicated meal.

Simple is however better.

Default settings for watches in Changedetection

I'm addicted to Changedetection for spying on website changes and internet search results for specific keywords, and occasionally for monitoring price changes. It's quite handy for discovering new links added to web directories, or for staying up to date with websites that do not provide any RSS feed.

Context

  • I'm watching hundreds of URLs.
  • I often spy on webrings and blogrolls to discover interesting new links, and also on search engine results for specific keywords.
  • I'm self-hosting Changedetection through Cloudron.
  • I mostly follow those watches via my RSS reader, Miniflux.
  • For some specific changes, like bad weather conditions, I subscribe via ntfy.

Anyway, for every new watch I've developed a few habits that fit my workflow well:

Settings > General

This is where we set the defaults for all future watches; it's pretty obvious you should start here. Here are my current settings:

  • Time between check: by forcing a convenient interval between checks, you try to find a balance between information overload and staying current. Pick your poison, but don't hesitate to override this setting at the per-watch level.
  • Extract from document and use as watch title: it's convenient to let Changedetection name your watches based on the webpage titles rather than leaving the sometimes very long and non-human-friendly URL as the default description.
  • Random jitter: this is handy to avoid stressing your I/O too much.

General > Group tag

This one is mostly about organizing things better. As I mentioned, I follow those changes through RSS, and I noticed it was hard to distinguish important from less important items when following the default RSS feed. Changedetection, however, provides distinct RSS feeds per group/tab of watches, and that's my preferred workflow now.

I try to always set a label. I have around 15 in total, some for specific interests (privacy, discovery aka lists of links, devops, music, ...), others for specific people, locations and business updates. The rest is generally less important and is labelled with things like FOMO, misc, ...

Example of group tag.

Those group tags appear as labels next to the URLs you are watching.

Example of labelled URLs.

If you want to watch a whole group through RSS, the link is at the bottom right of the page on the group tab.

Filters & Triggers > Remove elements

On bloated, rich web pages it's common to want to focus on specific parts, like everything between the <header> and <footer> sections, so I sometimes have to add footer and header here (see the short example below). It's mostly needed for sites like eBay or 2ememain, where people buy and sell things.

Remove HTML elements.
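
If I recall the field correctly, it takes one CSS selector (or XPath expression) per line, so for the kind of pages mentioned above it is simply:

header
footer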

Filters & Triggers > Default filters and triggers

My text filtering defaults in Changedetection.

This is purely for spam reduction, as I mostly want to know when something new is added.

Sometimes I also enable Sort text alphabetically, depending on how the page is updated by its author.

🆕 A few settings have been added recently, and I'm now also enabling them on new watches.

Extension

Try the web browser extension for Chromium-based browsers; it makes adding watches a one-click affair.

Next

I've opened a discussion in Changedetection's repository about how repetitive this feels to me, in the hope that something like template settings will be proposed in the future, at least for the filters & triggers, which I think would not be too hard a place to start.

Miniflux scraper rules

I follow the Joy of Tech comic via RSS in Miniflux, but the image was never loading.

I found half a solution in this blog post by Jan-Lukas Else; unfortunately, the proposed rule fails, probably as a consequence of some changes in the format of the Joy of Tech pages.

The fix is actually quite simple. Edit the feed settings and set the scraper rules to the following:

p.Maintext > img[src$=".png"]

This selector keeps only the <img> elements whose src ends in .png and that sit directly inside the p.Maintext paragraph. And of course enable "Fetch original content" in the feed options.

And voilà, simple and beautiful.