Yesterday I discovered that I had leaked 3,638 email addresses by uploading them to a public GitHub repository.
Here's what happened, how it happened, what I did about it, and what I'm doing to make sure it can't happen again.
TL;DR
This post will be long, so here's the essential stuff if that's all you care about.
Yesterday morning (2026-03-03 06:00 -- all times in this post are UTC) I was alerted by email and private message that someone's unique email address, that they had only ever used on my services, had received a phishing email purporting to be from PayPal.
There's only one way this is possible: I had allowed the address to be stolen. I acknowledged this with a blog post at 06:46, requesting that people let me know if they were affected.
Through the day I received, or could infer from my own data, 17 addresses that either had or had not received the phishing email.
Analysis of these 17 data points led me to the realisation that I had uploaded a folder full of text files containing email addresses to a public GitHub repository. This made them visible to anyone who cared to go looking. I confirmed this theory at 18:00 and immediately made the repository private. I acknowledged this with a blog post at 18:18.
The first upload of 1,650 addresses occurred on 2024-10-08. Further updates were uploaded through 2025. The final number of addresses in the repository was 3,638. Because I don't know when the data was scraped from GitHub, I can't be sure which of those addresses were harvested. My assumption must be that they all were.
I am deeply sorry. I strive every day to be an exemplar of 'the good Internet'. In this instance, I have failed you quite miserably.
How to determine if your address was leaked
Immediately after this post has been published I will email the 3,638 addresses that were leaked.
Subject: Important: your email address was exposed [D25.14.44]
If you receive this email then your address was leaked.
You can also check if you received the phishing email:
From: PayPal Security <no.con**star@ca**a.cl>
At: around 2026-03-03 05:30 UTC
Subject: Please confirm your identity
If you receive this email then your address was leaked. (Check your spam folder: many providers correctly identified it as a phishing attempt.)
If you do not receive either of these emails then your address was not leaked.
Was any other data leaked?
Like names, addresses, dates of birth, passwords, or access to accounts?
No. The only data leaked was a list of email addresses in a text file.
What's the impact of the leak?
Any leaked email address can expect to receive spam and phishing attempts.
To be clear, your account has not been compromised. Only your email address has been made public. You do not need to change your password (assuming it is already unique -- see What can you do? at the end of this post).
Here's why I had a folder full of your email addresses, and how they ended up on GitHub.
Over the years I've used a variety of third-party platforms to deliver websites and services, as well as hosting a public email list.
Buttondown
Used to host the mailing list from 2019–2024.
Allowed users to opt-in to email marketing.
Users hold an account of sorts; while you don't have a password, the platform records your email address.
Netlify
Used to host johnnydecimal.com and jdhq.johnnydecimal.com from 2019–current.
Users do not hold an account at Netlify: it's an infrastructure service only.
Gumroad
Used to sell the Workbook from 2023–2024.
Allowed users to opt-in to email marketing.
Users held an account at Gumroad.
Thinkific
Used to sell various products from 2023–2025.
Users held an account at Thinkific.
Shopify
Used to sell various products from 2024–2025.
Allowed users to opt-in to email marketing.
Users held an account at Shopify.
Stripe
Used to process payments from 2024–current.
Users hold an account of sorts at Stripe: you can't log in, but they hold a record of transactions linked to your email address.
PayPal
Used to process payments from 2024–current.
Users may hold an account at PayPal.
It's also possible to check out as a guest, but in either case your email address is recorded.
Listmonk
Used to host the mailing list from 2025–current.
Allows users to opt-in to email marketing.
Users hold an account of sorts; while you don't have a password, the platform records your email address.
The link at the bottom of every email allows you to unsubscribe (in which case your email address is retained) or delete your data entirely.
PikaPods
Used to host Listmonk from 2025–current.
Users do not hold an account at PikaPods: it's an infrastructure service only.
Amazon Simple Email Service (SES)
Used to send emails from 2025–current.
Users do not hold an account at Amazon: it's an infrastructure service only.
Clerk
Used to host JDHQ user accounts from 2025–current.
Users hold an account at Clerk.
If this sounds like a nightmare, it's because it is. But this is the reality of running a small online business: you stitch together that business based on the services available to you at the time. This is a factor of features, cost, and experience. As your business grows, you move between platforms.
To be explicit: none of this is the fault of any of these platforms. I list them all merely to demonstrate the range of stuff one has to deal with.
The reality of running this type of business is that you spend a lot of time consolidating user data from these services. Why is this person in this data set and are they the same person as this person in that data set? You do this so you can mail the right people the right information; send subsets of people offers that are only relevant to them; migrate user accounts from old platforms to new platforms.
As a small business with limited resources there's only one practical way to do this analysis: Excel. In a bad month I might spend 50% of my time 'mashing' data like this. I hated having to do it, and eliminating it was a very strong driver in my building JDHQ: because now you have a single account, forever. No more mashing of CSV files exported from half a dozen platforms.
(Ironically, this leak is partially a result of my very, very strong desire to never send anyone an email that they don't want. I've spent dozens of hours over the last few years meticulously poring over and cross-referencing this data, taking pains to ensure that someone who opted out on this platform didn't get opted back in on that platform. Had I not bothered, some of this data might not have been on my laptop. C'est la vie.)
So we have a folder full of CSV files
That's where we are in the story: I have a folder full of CSV files from these various platforms that I've been using to consolidate and migrate accounts, and to send email directly from my laptop via Amazon SES.
So how did they end up on GitHub? Sheer stupidity.
This repository started on 2024-01-10 as an intentionally-public archive of emails sent to the mailing list. It contained 3× text files.
The problem started on 2024-10-08 when I started to use the same folder to store the output from these CSV files.1 Forgetting that the linked repository was public, I committed these files and 'pushed' them to GitHub. At this point they were available publicly.
Why this git/GitHub thing?
It's natural to use git -- which is software independent of GitHub, the website -- to manage a dataset like this. It gives you version control, which is really useful. If you mess up, you can just 'roll back' to a previous state.
Using git locally isn't the problem. Not realising that the linked GitHub repository is public and pushing to it was the fatal mistake. I'll address this below.
From 2024-10-08, this folder, which lives at ~/dev/amazon-ses on my laptop, continued to be used as a place where I stored lists of email addresses so that I could send email via Amazon SES. Every time I did that, I committed the changes and pushed them to GitHub. And so by yesterday, the repository held 3,638 unique email addresses.
My own password hygiene
So that I don't need to make the point below with regards to each specific service, I'll make it here.
I have exclusively used 1Password for password management since 2009. All of my passwords are unique and they exceed all modern standards for entropy. Where 2FA is an option I always enable it.
It was, at least, nice to know from the start that this leak wasn't the result of bad password management.
My 'secret key' hygiene
Separate from passwords, developers of sites like mine have 'secret keys' that are used by servers and services to talk to each other. From JDHQ, for example, you can opt-in to product notification emails. This is possible because the service that serves JDHQ, Netlify, has the secret key for Listmonk and can talk to it directly via the software I've written. If you have the secret key for a service, you can read all of the data contained therein.
I was already planning an article detailing the steps I take to secure these keys, so I'll just note here that they're also all stored in 1Password, and that I had already taken what I believe to be extraordinary lengths to secure them against attack. More to follow.
Listmonk and PikaPods
I 'self-host' my mailing list using Listmonk hosted on a PikaPod.2 At 06:31 I identified that the Postgres instance used by Listmonk was open for login using a public console. This isn't enabled by default: I had turned it on a few months earlier so that I could connect directly to the database using client software on my Mac.
Forgetting to disable that access was definitely a mistake -- see action 4, below -- but the risk seemed low. The username and password are set by PikaPods and both were secure: the username wasn't admin or similar, and the password was a 24-character string. Checks of the console software, Adminer 5.4.2, showed no known vulnerabilities.
For these reasons, I dismissed the possibility of a leak via this route. Because I was so sure that this wasn't the cause, I didn't email PikaPods until 15:36. Finally doing so, I asked them if there were any logs for this console endpoint that might be useful.
Their reply just 45 minutes later was stunningly helpful, clearly written by a caring human. I could not have recommended them more before this happened, and yet here we are, my endorsement stronger than ever. A superb service, utterly without fault.
At this point (16:13) I hadn't positively eliminated this as the cause, but it seemed vanishingly unlikely.
Buttondown
Early data supported a theory that Buttondown's service had been compromised. As a reminder, they hosted my mailing list from 2019–2024, so it was no surprise that the leaked addresses matched my Buttondown exports 1:1. I mailed their support at 08:57 emphasising:
"This is NOT an accusation -- I'm trying to figure this out myself. It's purely FYI, and I genuinely hope for you that I'm wrong."
– thinking that, if a leak had occurred, their support might appreciate the data.
Through the day I continued to receive data points, none of which disproved this idea. This analysis was basic science at work: build a hypothesis, collect data, see if data supports hypothesis.
Of course good science is about a falsifiable hypothesis, and at 15:31 I was made aware of 3× addresses that had received the phishing email but were not in my Buttondown exports. It was a relief to rule Buttondown out as the cause, and I notified them immediately.
Again, I can't sing the praises of a company enough. Anita from Buttondown support felt like a friend yesterday, keeping in touch even after I informed them of this finding. If you need an email newsletter, use Buttondown. They're truly good people.
How I came to the answer
Around 16:30 we went for our usual end-of-the-day walk. Still not knowing the cause, I replayed all of this to Lucy. Thoughtful questions followed and, via this conversation, the thought occurred to me. Back to science, this is Occam's Razor in a nutshell: given all possibilities, the simplest should be considered the most likely.3
The simplest possibility being that I, as a holder of all of this data on my laptop, committed it to a public repository. Getting home, this was confirmed at 18:00.
A note on the data sources involved
You might be on this list despite having never signed up for my mailing list. For example, hundreds of email addresses from JDHQ members are impacted. Those addresses aren't on the public list, because I never sign you up without your explicit opt-in.
These addresses are in the data because I used the leaked scripts and Amazon SES to send transactional emails as well as marketing emails to the public list. Similarly, addresses from previous platforms may be present.
Lessons learned
Looking out of the window earlier this week I saw a fire engine, sirens blaring, scream up behind a learner driver. Should be part of your driving test, I thought. Because you can't truly be prepared for the panic induced by sirens a metre behind you until it happens.
In a way, I'm glad this happened. Without trying to minimise the event, it's fair to say that on the spectrum of security and data leaks, this is about as benign as it gets. Better that this happens now when I can learn from it, than a much worse event happens later.
Because if this hadn't happened, I would have spent the day developing JDHQ, where you'll soon have the ability to store your own notes, and create your own IDs.4 It is impossible to convey the depth of responsibility I feel for this data. I lie awake at night thinking about how to keep it secure. (An enjoyable problem to solve, to be clear.)
Like I said above, I've already been planning a post addressing this data and the measures I'll take to protect it. So here let's just talk about what I learned yesterday.
If you don't have it you can't lose it
I can't stress enough that I do not want your personal data. Having it means having responsibility for it: and then look what happens.
You may notice that I don't ask for your name when you sign up for JDHQ. Requiring a name is an option at Clerk, where your user account is held. I turned it off. You may notice that I don't ask for your address when you make a purchase. This is an option at Stripe, which processes your payment. I turned it off.
Still, in diagnosing this issue yesterday I realised how much data I have on this laptop. Again -- see above -- this is largely unavoidable. I run a business, I need to manage the business' data. For example, every quarter I need to download my transactions for tax reporting.
But I can definitely change my behaviour.
Action 1: make customer data more difficult to access
You might think I'd be safer just deleting it, but this data proved very useful in troubleshooting yesterday's issue. Without it I'd have been blind. So I'd rather not delete it; I don't think there's a problem that this would solve.
The problem to solve is that having this stuff sitting in folders on my laptop is too loose.5 It's too easy for something to leak; too easy for me to think of these files in the same way that I think of all my other files. Instead, I need to be acutely aware, when I'm interacting with them, that I'm in some special place. I need to be on high alert.
I have created an APFS encrypted disk image and moved all existing customer data to it. It is mounted on-demand. The password is in 1Password and requires a manual copy/paste: I won't store it in my Keychain, so the disk can't ever mount without my explicit action.
Mounting it will forever recall this incident, and I'll be vigilant. I'll do what I need to do and unmount it. There's no chance that I'll make the mistake of pushing something in there up to the cloud.
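For the record, that encrypted image is just the standard macOS tooling. A sketch of the commands involved -- names and sizes are illustrative, not my actual setup:

```shell
# macOS-only sketch: create an encrypted, sparse disk image for customer
# data. The password prompt on attach means the volume can never mount
# without explicit action (don't save the password to the Keychain).
hdiutil create -encryption AES-256 -type SPARSE -fs APFS \
  -size 2g -volname CustomerData ~/customer-data.sparseimage

# Mount on demand (prompts for the password), do the work, then eject:
hdiutil attach ~/customer-data.sparseimage
# ... work in /Volumes/CustomerData ...
hdiutil detach /Volumes/CustomerData
```

The sparse image only consumes disk space as data is added, so the 2 GB ceiling costs nothing up front.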
Action 2: only download what I need
When I download data from these platforms -- say that quarterly tax analysis from Stripe -- my tendency has been to be lazy, and grab everything. As in, choose all the columns, download everything offered.
Because if you're not exactly sure what you need, it's more convenient to have everything to hand than to have to go back and get what you missed.
From today, I'll only download the specific data that I need to do the job. For Stripe's tax analysis, that might be as minimal as the transaction ID, amounts, and the country of purchase. Your name and email address aren't a factor in my reporting a quarterly sales tax total to the Australian Tax Office -- so why even request them in the export?
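And even when a platform's export screen doesn't let you drop columns, it's trivial to strip them the moment the file lands. A sketch -- the column layout here is made up, not Stripe's actual export format:

```shell
# Sketch: keep only the first four columns (id, amount, currency, country)
# of an export, dropping name/email before the file goes anywhere else.
# The column layout is illustrative, not Stripe's real format.
cat > /tmp/full-export.csv <<'EOF'
id,amount,currency,country,email,name
txn_1,49.00,AUD,AU,someone@example.com,Jane
txn_2,49.00,USD,US,other@example.com,Sam
EOF

cut -d, -f1-4 /tmp/full-export.csv > /tmp/tax-only.csv
```

The personal data never leaves the original export, which can then be deleted.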
Action 3: Proactively delete accounts
I still had a Buttondown account that I wasn't using. It still contained thousands of email addresses.
It, and its associated data, has been deleted. If I think of any other similar accounts, I'll delete them.
Action 4: reminders to disable open services
While it proved not to be relevant, I was disappointed to find that I'd left the Listmonk Postgres database console access enabled. This was an unnecessary risk and happened because I simply forgot to disable it when I was finished.
In the future, if I open up anything like this -- which, again, is going to be necessary to run a business -- I'll set a timed reminder for myself to close it when finished.
Action 5: GitHub is not a backup service
I have a tendency to think of GitHub as a useful backup service. Do a quick git push and now the precious data that I just spent all day massaging into shape is copied to the cloud.
This isn't what GitHub is for! Never again, for any data.
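Beyond the rule itself, a mechanical guard is cheap. Here's a sketch of a check -- wired into something like a `pre-push` git hook -- that refuses to proceed if anything in the folder looks like an email address. The regex and paths are illustrative; this is not what I actually run:

```shell
# Sketch of a pre-push guard: scan a folder for anything that looks like
# an email address before allowing a push. Regex and paths are illustrative.
scan_for_emails() {
  grep -rlE '[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}' "$1"
}

mkdir -p /tmp/ses-demo
echo 'aws sesv2 ... subscriber@example.com' > /tmp/ses-demo/send.sh

if scan_for_emails /tmp/ses-demo >/dev/null; then
  echo "Refusing to push: files appear to contain email addresses."
fi
```

A guard like this would have stopped me cold on 2024-10-08.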
Action 6: Compliance/reporting
Thanks to my Discord for flagging the possibility that I might need to register this breach with various national authorities.
I've investigated a few, and this event is below the reporting threshold. GDPR says that I need to keep internal breach logs and learn from the event, which I was already doing. If you're aware of a stricter requirement from your local authority, let me know and I'll gladly comply. Obviously I can't check them all.
What can you do?
You, as in the reader. What can you do to make yourself more safe online?
Lucy and I have been talking for at least a year about producing a (free) video series addressing the basics of online hygiene. We've moved that idea to the top of the list and will start working on it immediately.
I need to get this post published so I won't go into details here, but the two things you can do to put yourself above 99% of everyone else are:
Use a password manager. Allow it to generate random passwords for you, different for every site. This isn't optional in 2026.6
Use a unique email address for each service you sign up for. This is more difficult, and introduces complexity.
If you run a small business and need help: we are here. This stuff is difficult: I messed up and I'm supposed to be an expert. Please ask and we will help you.
End of incident review.
100% human. 0% AI. Always.
Footnotes
More accurately, the leaked data took the form of lists of email addresses in a .sh script, which called the aws sesv2 command to send an email. For the sake of simplicity I'll continue to refer to 'CSV files' in the main story; the data is identical, the only difference being a file extension. ↩
Acknowledging that this is a stretch of the definition of 'self-hosting'; the point being that PikaPods provides me the raw instance, and that configuration and management of this instance is my responsibility. They sit between PaaS and SaaS. ↩
I know that's a common mis-reading of Occam's Razor, which actually states that the theorem that introduces the fewest new elements should be considered the most likely. Close enough. ↩
One person has already written me asking why I think storing user data in JDHQ is a good idea. This isn't the post to prosecute that question but briefly: I'm building features that I want. I think you'll find them useful too. If you never want to use them … just don't. ↩
Noting that I don't consider the laptop itself to be an attack vector. It requires my password immediately after being locked, has FileVault full-disk encryption, and has no open Internet ports. ↩
Coincidentally, a friend of Lucy's was hacked last week. They got her Microsoft OneDrive account and everything in it. She's in the process of getting new passports and driver's licences. A true nightmare. Why? Shared passwords. Your password must be unique. ↩
Further to my discovery of the data leak earlier today, I spent most of the day trying to figure out what had happened. Finally, on a walk with Lucy after we knocked off, the lightbulb went off.
I didn't, did I? Surely not.
I did.
I uploaded a .csv containing your email address to a public GitHub repository. (For the non-technical of you: that's just a web page. I published your email address on a web page.)
Good news: not a 'hack' in the traditional sense. Just a dumb mistake.
Bad news: oof. Johnny. So bad.
To you, whose address is now public, I can only offer my humblest and sincerest apology. Unless you know me personally you won't know what this means to me.
Full report to follow
I'll write this up in detail tomorrow. There's a lot to learn.
I haven't mailed everyone affected yet because there hasn't been much to say. I'll send a link to tomorrow's post once it's up.
This morning I received an email from someone whose unique johnnydecimal@example.com address has received a PayPal phishing email. My personal address received the same email.
The email is from no.con**star@ca**a.cl. Its subject is Please confirm your identity.
I am investigating. If you received this email, please forward it to me.
We're going to try to post much more regularly on YouTube this year. At the end of each month I'll post a link to each of our videos.
(It does help us if you 'subscribe' to the channel, in YouTube, even if you never actually go there. I promise I'll never become a like and subscribe! bro.)
There's a lot of beginner-introductory stuff this month, and the start of a Small Business series that we'll continue through the year.
YouTube is a hot mess, unfortunately
On testing this page, I notice that YouTube -- the largest video platform on the Internet, owned by one of the world's richest companies -- doesn't actually work. As in, if you're unlucky 👋🏼 you'll be asked to 'sign in to prove you're not a bot' despite being signed in, with no button that would let you sign in.
I've put a link below each of the videos, and longer-term we'll get these loaded up to JDHQ where you can watch ad-free without any of this nonsense.
My apologies. Not my platform.
What filesystem problems am I trying to fix? (2026-02-03)
This desire has a few roots. I'd like to be as close to the canonical ideal of Johnny.Decimal -- the persona, not the man -- as possible. Every time I do something in my business and I'm not doing it JDex-first, a ~~fairy loses its wings~~ decimal loses some precision.
This isn't some idealistic dream. Things are worse when I forget to update my JDex. I run a business with a whole bunch of complexity and I do it from a 13" laptop from a hotel room desk. So when I forget to update some detail about how my mailing list is configured, say, then that bites me later. It's lost time, more work, and a frustration because I have the tools. I have no excuse. I invented the tools!
I am also prone to jumping between tasks during the day. This isn't good for an ADHD-ish brain, it isn't good for productivity, and it's exhausting.
So that's the goal. Picture my work habits as a colony of ants, each holding some morsel of goodness, dashing around seemingly at random, every tempting scent an invitation to turn off the current path.
I'd like to work more like an old dog heading towards its bowl. Plod towards the bowl. Eat what's in the bowl. Maybe lie down a moment. Good dog.
Figure 22.00.0183A. Things, as small as a window can be made.
And I've already got a JDex with an entry for everything I do. I've been playing with this idea of a 'work log' for a while, so I'm going to combine these two techniques along with timers and some checkpoints through the day.
None of this is rocket science. I just need to do it and form some good habits.
Work log
I still haven't found a good tool for this, so it's something I'm going to build into JDHQ. But for now here's what I'm doing in Obsidian, which runs my JDex for the business.
The goal is to 'check-in' and 'check-out' of IDs as I work on them. This keeps me on track; allows me to log a quick line or two about what I did; and provides visibility of everything I touched at the end of the day/week/month.
I'll outline it here but don't worry about the details. I'm still tweaking this and will do another post later.
I have a note in my system's standard zero, 00.15 Work logs, by the day. It serves purely as a target for backlinks, because in each entry that I touch through the day, I have a header Work logs and under there I create entries that look like this:
- [[00.15 Work logs, by the day#2026-02-26|2026-02-26 09:04]]
- Checked in.
There's a bit of Markdown magic going on there.1 Again, details for a future post. How this looks in note 00.15 is what matters.
Figure 22.00.0183B. Backlinks at 00.15.
I've filtered the backlinks on today's date, and there's what I'm doing now, and what I've done so far. Nice.
Statement of intent
I'm telling you all that I'm doing this so that I keep doing it. I'll report my findings each week, and summarise what I've been working on.
100% human. 0% AI. Always.
Footnotes
Also I don't type this out every time. A Raycast snippet bound to ;;worklog expands it for me: [[00.15 Work logs, by the day#{date format="yyyy-MM-dd"}|{date format="yyyy-MM-dd hh:mm"}]]. ↩
This post is a deep-dive into two aspects of your JDex: whether or not you use it to store data, and the different ways you can store the JDex itself. This article is long, and isn't a tutorial; this is reference material that I can now link to when common questions arise.
…the revelation was realising that my filesystem, (long-form) notes folder, physical filing cabinet etc were the bookshelves in a library, and my JDex was the index card drawer. One (the card index) describes the shelves and what's on them, the other (the shelves) have the content.
That's brilliant. Let's make it really explicit. Put yourself in your local city library, some time in the 1960s. 🕺🏼
Index cards
Before computers, libraries kept records of books on index cards.
Figure 22.00.0182A. The good old days.
Crucially, you can think of the index card as a representation of the book itself. If a card doesn't exist, the book might as well not exist. To find a book in a library you don't start at a bookshelf. You start with the index of books. Here's the process.
Go to the index.
Using the structure of the index (Dewey, in the case of your library) find the book in question.
The index card will tell you where the book is physically located.
Assuming the book is in the building -- it might be out on loan -- walk to the location indicated on the card and get the book.1
Your JDex
Your Johnny.Decimal index (JDex) was designed to work exactly the same way. Your Johnny.Decimal IDs are the index cards. Each ID belongs to a category, which is analogous to the drawer. And -- not shown here because the diagrams are busy enough already -- we understand that each category belongs to an area just like each drawer is housed in a cabinet.
Figure 22.00.0182B. Categories hold IDs.
This is why I say the JDex is the most important thing in your system. A library without a catalogue is no longer a library: it's just a room full of books. Your life without a JDex is no longer a Johnny.Decimal system: it's just a bunch of files (even if they are neatly organised).
Metadata
Metadata is 'data that defines and describes the characteristics of other data' (Wikipedia). So if data is a book's contents, metadata is its publication date, ISBN, current location, and so on.
We use this all the time without being conscious of it: the last modified time of a file is metadata. The fact that some piece of knowledge is in your email is metadata. (The knowledge itself being data.)
The thing about metadata is that it's typically tiny in comparison to the thing that it represents. As a result, we don't have to find some new place to store it. We already have one: the index entry.
For the library, this means writing this stuff directly on the index card. For your JDex, it means that your IDs' metadata is always stored directly with the JDex entry. Metadata is a fundamental component of the entry: it's not some separate thing to manage.
Here's how that might look with a handful of entries from the Life Admin System. Remember: purple box = category. Blue note = ID.
Figure 22.00.0182C. IDs from the Life Admin System with metadata.
Metadata should be standardised
You should decide on a common set of metadata 'keys' and use them consistently. This allows you to query it later: for example, finding all entries where Last updated: is after '2026-01-29', or all entries where Location: is 'fireproof lock box'.
This is only possible if you always call it Location. If you sometimes call it Where is it?, that's now a different thing.
Nerd note: we call these key/value pairs. Keys are standard. Their values change. As keys are special, I'll show them like this in this post.
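This is why consistency pays off: with a fixed key, a one-line search answers the question across every entry. A sketch, assuming your JDex is plain-text notes in a folder -- the folder and file names here are illustrative:

```shell
# Sketch: with standardised keys, one grep finds every entry whose
# Location: is the fireproof box. Folder and note names are illustrative.
mkdir -p /tmp/jdex-demo
printf 'Location: fireproof lock box\n' > '/tmp/jdex-demo/11.12 Passports.md'
printf 'Location: desk drawer\n'        > '/tmp/jdex-demo/11.01 Birth certificate.md'

grep -rl '^Location: fireproof lock box' /tmp/jdex-demo
```

Had the key sometimes been written Where is it?, that search would silently miss those entries.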
This is an important distinction as we consider what's metadata vs. data, below.
In the near future, the Johnny.Decimal system will recommend a standard set of metadata keys. If I use it in this post, you may assume it will be in that standard.
'Above the line'
Note the horizontal line in these diagrams. That's deliberate: in my JDex, I use a line to separate metadata from data. Metadata is always 'above the line'. We'll see data 'below the line' shortly.
Above the line: the book's metadata.
Below the line: the book's contents.
Storing data in your JDex
In contrast to metadata, your library's index cards weren't designed to store the data that they represent. They aren't the book's contents.
But nor are computers 5×7" pieces of cardboard stored in a little metal box. So in this new world, you might choose to extend the usefulness of your index entries by using them to store data.
Figure 22.00.0182D. An ID entry with data 'below the line'.
Here, we're still recording the location of your passport as metadata 'above the line'. But we're also storing a bunch of other data 'below the line'.
Nerd note: we call this 'structured' vs. 'unstructured' data. Here metadata is structured, with standard keys. Our below-the-line data is unstructured: it's different for each entry.
This is what I do and recommend. It's simple. Everything's in one place. It's very hard to lose information, and very quick to retrieve it.
To address the alternative scenario, we need to expand this model.
Storing data outside your JDex
Back to the physical library. There, index cards point at books, and books contain data.
Figure 22.00.0182E. A book's index card points to the book's contents.
In exactly the same way, we can move the data out of our JDex note and into its own file.
Figure 22.00.0182F. A JDex entry points to its data.
Here, 11.12 Passports, residency, &… is an ID folder in our filesystem (orange). We've created 2 files there (yellow): one for our partner's passport, and one for our own. And we've moved the data out of the index entry into these text files.
To be sure we don't forget about this data, we've reminded ourselves about it using a Data property. (This is optional. But if in doubt, it's worth doing. It takes no time.)
A note on Location
Remember that this metadata relates to our passports. As in, the little book you show at immigration: not the Johnny.Decimal ID that is the concept of your passport.
That's why Data is split from Location. We have data about the passports, which is in some place. And, separately, the (physical) location of the passports.
But let's not get distracted. I'll have more to say about Location in a future post.
Recap
To recap, we've introduced two scenarios.
You store data in your JDex entry, below-the-line.
You store data externally, and (optionally) reference it above-the-line.
It might be interesting to note that here at JDHQ we use both of these methods. Johnny tends to prefer the first. Lucy, perhaps because of her career as a copywriter and editor, leans towards the second. They happily co-exist.
Quadrant chart
Let's introduce the beginnings of what will become a quadrant chart. This will help us to orient these concepts relative to each other.
The left/right split represents the scenarios just discussed.
Figure 22.00.0182G. All quadrant charts are born as two rectangles. True fact.
This is a continuum
There's a hard line down the middle of this diagram, but life isn't like that. The reality is that this is a continuum, with most cases somewhere in the middle.
To the far left of this diagram is the situation where you exclusively store data in your JDex. This is unrealistic: will you never save a file? Files are data. So when I say that you 'store data below-the-line', I mean some data: usually textual data, as it's natural to add that to the existing JDex entry. As soon as you also save a file, or keep any artefact related to this ID outside the JDex entry, you've moved away from that left edge.
The right edge of this diagram is somewhere you might actually exist. In this situation, you never store any data in your JDex. For example, your JDex might be an Excel spreadsheet, with one row for each ID.2 Excel is a terrible place to try to write any sort of long-form text, so it would make sense for no data to be stored there.
This diagram is just useful for us to map out all the ways we can do it. But none of them is more correct than the others. It's all preference.
Where is your JDex stored?
There's another fundamental question to be addressed, and it relates to the storage of your JDex. We've established that you have a JDex. Where is it?
This is another continuum, the opposing sides being:
Store your JDex as individual files in your filesystem.
Store your JDex as some other artefact, not in your filesystem.
Let's explore these options.
JDex as individual files in your filesystem
What we mean here is simply that each JDex entry is an individual file, usually a text file, often formatted with Markdown.
You could manage these manually, but it's far more common to use an application, and the most common of those is Obsidian. For those who don't know it, you point it at a folder and it'll show you the structure of that folder and all of the Markdown files in it. Select a file from the left, edit it on the right. JDHQ provides downloads for the Life Admin and Small Business Systems that are designed to work with Obsidian.
Figure 22.00.0182H. Obsidian.
People enjoy Obsidian because it sits over your files: it's a convenient utility. But if you get sick of Obsidian, or if they decide to charge $500/year for a licence, or if you want to give some other app a try, you can just stop using it. All of your files are still Markdown files in a folder that you control.
The vertical axis of our chart now represents where your files are stored. Obsidian sits in the top half, where your JDex is 'files in your filesystem'. It spans both top-left and top-right quadrants as you still have the choice of where to store data: in your JDex, or externally.
Figure 22.00.0182J. Quadrant chart with Obsidian represented.
JDex stored elsewhere
There are a lot of options in this bottom half, so let's explore the simplest.
The other app that I love and support (with specifically-formatted downloads from JDHQ) is Bear. Superficially, Bear is identical to Obsidian: list on the left, editor on the right.
Figure 22.00.0182K. Bear.
But there is a crucial difference: Bear manages these JDex entries entirely for you. There's no concept of 'text files on your disk' to manage. There's no option in Bear to change the location of these files. You can't, outside of Bear, go and look at them. Another app can't edit them. You can't put them in a shared folder and collaborate with your partner.
(Technically, in fact, they aren't even a collection of text files. Rather, Bear manages them using a database which they 'highly recommend' that you do not touch.)
This sounds limiting, and in many ways it is. Why would you want this? Well, it's a lot simpler. You load your JDex files into Bear once, and then never have to think about where they are. Want them on your iPhone? Just install Bear and they'll appear. They're harder to lose because Bear stores them in your iCloud Drive so if you drop your laptop in a lake, no worries.3
Many apps work like this: Apple Notes, Craft, Evernote, Google Keep, Notion, and OneNote to name a few.4
Let's keep it simple and add Bear to our quadrant chart.
Figure 22.00.0182L. Quadrant chart with Bear represented.
The point of this distinction is that when you're using a method in the bottom half of this chart -- more of which we'll see shortly -- the decision of where to store your JDex is made for you. This is one less thing for you to have to manage.
Where are your JDex files?
A question naturally arises: if we're talking about you storing your JDex as 'files in your filesystem', where are those files? As usual, there are a few options.
Ignoring your JDex for a moment, here's a simple representation of your filesystem. As in, the Johnny.Decimal structure, probably in your Documents folder or on a cloud drive, where you save all your stuff.
▓ 10-19 Life administration/
▓ └── 11 Me & other living things/
▓ ├── 11.11 Birth certificate & proof of name
▓ ├── 11.12 Passports, residency, & citizenship
▓ └── …and so on
__________________
Figure 22.00.0182M.
The dark blocks on the left will make sense shortly. These tree diagrams don't work well on mobile, sorry. I've made sure that they are clear and flow properly at larger sizes.
It's pretty obvious that if you had a PDF that you needed to save -- say your latest passport renewal -- you'd put it in 11.12. Because it's a file, that you're saving in your filesystem. No new concepts there.
So if we're storing our JDex as files in our filesystem, it must be in here somewhere. Where else would it be?
This is one of the IDs that are reserved by the Johnny.Decimal system, and I hope the ID is obvious. It's the very first thing you should encounter in your system, the top of the tree: 00.00. It lives inside the system-reserved category 00.
▓ 00-09 System-management area/
▓ └── 00 System-management category/
▓ └── 00.00 JDex for the system
▓ 10-19 Life administration/
▓ └── …and so on
__________________
Figure 22.00.0182N.
Happy with that? There's a lot going on, and it's about to get deep. Get comfortable with that before we move on.
What goes in 00.00?
Above, we noted how Obsidian just watches a folder full of files and lets you edit them. The screenshot (figure 22.00.0182H) shows those folders in the left pane, full of text files. Those folders look like your Johnny.Decimal system. As this is your JDex, they define your system. So they look something like this.
░ 00-09 System-management area/
░ └── 00 System-management category/
░ └── 00.00 JDex for the system.md
░ 10-19 Life administration/
░ └── 11 Me & other living things/
░ ├── 11.11 Birth certificate & proof of name.md
░ └── 11.12 Passports, residency, & citizenship.md
__________________
Figure 22.00.0182P.
No, I didn't duplicate the previous figure by accident. Note the subtle difference from figure 22.00.0182M: now the IDs are Markdown (.md) files, which is what Obsidian will display in the right pane.
So if we stitch these two diagrams together, this is what your filesystem looks like.5
▓ 00-09 System-management area/
▓ └── 00 System-management category/
▓ └── 00.00 JDex for the system/
░ ├── 00-09 System-management area/
░ │ └── 00 System-management category/
░ │ └── 00.00 JDex for the system.md
░ └── 10-19 Life administration/
░ └── 11 Me & other living things/
░ ├── 11.11 Birth certificate
░ │ & proof of name.md
░ └── 11.12 Passports, residency,
░ & citizenship.md
▓ 10-19 Life administration/
▓ └── 11 Me & other living things/
▓ ├── 11.11 Birth certificate & proof of name
▓ ├── 11.12 Passports, residency, & citizenship
▓ └── …and so on
__________________
Figure 22.00.0182R.
Be really comfortable with that before we continue. At first glance this looks bonkers. But when you understand that 00.00 is a Johnny.Decimal ID that just happens to contain a folder structure that mirrors the overall structure … it's fine. And you're never spending any time in there yourself: let Obsidian manage it.
This is the 'partially nested' pattern
When you download your JDex files from JDHQ, there are a handful of options for Obsidian. This is the 'partially nested' pattern, in that the ID files are nested within an area/category structure. Obsidian shows this structure in the left pane.
There's an alternative 'flat' structure, in which those Markdown files aren't nested within an area/category structure. Some people just prefer that. I'll leave it as an exercise to the reader to imagine how this looks in your filesystem. (You can download these files from JDHQ as many times as you like: try it out. Note that you'll need to select Obsidian as your JDex app first. Both links require a JDHQ account.)
Let's place a dot on the diagram that represents these patterns. We're not concerned with whether we're storing data in our JDex or not, so we'll stick it in the middle.
Figure 22.00.0182S. Obsidian on the quadrant chart.
The 'fully nested' pattern
In the scenario just described, your JDex files were separate from the rest of your files: they're all saved at 00.00 while the rest of your stuff is below.
What if you didn't want to do this? What if you wanted to merge these two structures -- JDex and filesystem -- so that there was only one? This is possible, of course, and we call it the 'fully nested' pattern.
In this pattern, folder 00.00 goes unused.6 Instead, the Markdown files that are your JDex entries are scattered through your filesystem. Again, I've used the dark and light blocks to represent which components come from the original diagrams.
▓ 00-09 System-management area/
▓ └── 00 System-management category/
▓ └── 00.00 JDex for this system/
░ └── 00.00 JDex for this system.md
▓ 10-19 Life administration/
▓ └── 11 Me & other living things/
▓ ├── 11.11 Birth certificate & proof of name/
░ │ ├── 11.11 Birth certificate
░ │ │ & proof of name.md
▓ │ └── Photograph of birth certificate.jpg
▓ ├── 11.12 Passports, residency, & citizenship/
░ │ ├── 11.12 Passports, residency,
░ │ │ & citizenship.md
▓ │ └── Passport application.pdf
▓ └── …and so on
__________________
Figure 22.00.0182T.
On the surface, this feels like a good idea. Why wouldn't you want it all integrated? But I, and many others, have tried this, and the consensus is universal: it doesn't work.
The point of this article is to explain these methods, not to tell you why one of them is worse than the others. So, just briefly:
The clean separation of JDex from filesystem is a nice mental separation. It reinforces your JDex as being the more important thing. Merging them breaks that.
Previously, you were pointing Obsidian at a folder full of tiny text files. Now, you're pointing it at your entire filesystem. There might be terabytes of data in there. This can slow it down significantly.
You're no longer able to selectively synchronise folders, because every folder now contains one of your JDex entries. This was the killer blow for me.
So you probably shouldn't do this. For completeness, let's add it to the diagram.
Figure 22.00.0182U. Two Obsidian options.
I've placed this item nearer the top of the chart as your JDex files are more 'in your filesystem' using this method than they were with the previous method.
Core concepts finished
That's it for core concepts. In the next section, let's fill out the diagram with some examples.
Examples
There are a handful of scenarios that we already understand. The diagram's getting busy so I'll extract some meaning to a table.
Figure 22.00.0182V. I honestly thought I was going to run out of letters for the captions on this one. Close call.
                  Data stored          JDex stored
Obsidian 1        in JDex              'fully nested'
Obsidian 2        in JDex              'partially nested'
Obsidian 3        externally          'fully nested'
Obsidian 4        externally          'partially nested'
Bear 1            in JDex              managed by Bear
Bear 2            externally          managed by Bear
Excel             externally          as an .xlsx in your filesystem
Google Sheets     externally          in your Google Drive
Notion            within Notion        by Notion in their cloud
SQLite database   within the database  as a .db file in your filesystem
Airtable          externally          by Airtable in their cloud
Nit-picky points
We're getting into the weeds here, but I positioned those dots carefully, so we might as well explain why.
The left/right split isn't so interesting: you either store data in your JDex, or you don't. Your choice here might be limited by the method you use to store your JDex. As we've already said, a spreadsheet is a terrible place to keep long-form notes; spreadsheets tend towards the right edge.
If you use a database like Notion, it's up to you. In this diagram I've imagined a Notion setup where you do keep data there. Given Notion's power, this feels like a realistic scenario. But you could just as well have designed a pure JDex-only database and stored all data externally.
Where the files are, on the vertical axis, is a touch more interesting. Google Sheets is in the lower half while Excel is in the upper because the Excel sheet is a file that you need to manage, and the Google Sheet exclusively lives in your Google Drive.
That said, you do still need to specify where in your Google Drive the file exists. You still need to file it somewhere, even if that somewhere happens to be in the cloud. Which is why Google Sheets sits above Airtable, an online database. With Airtable there is no consideration at all of where your database is stored. You don't get a folder structure in which you have to 'file your database'. (Other than a rudimentary dashboard that you can rearrange.) Your database is just at Airtable.
Similarly, Notion is at the bottom, because all of your stuff is in the Notion cloud. Conversely you might prefer to build yourself a SQLite database which is now a file that you need to manage. As we now know, that database should be stored at 00.00.
That'll do
I wrote this article to set up a framework that I can use in the future; a diagram into which I can slot more scenarios. This isn't meant to be comprehensive. There are a thousand scenarios that aren't included here.
100% human. 0% AI. Always.
Footnotes
You might be thinking, 'that's not how I use my library'. But when you go to the library you're likely browsing the shelves. You don't know what you're looking for. That's a different behaviour from going to a reference library and recalling a specific book, which is the analogy here. ↩
A spreadsheet provides a nice way to think about your metadata. If each row is an ID, your columns store metadata. It's easy to see how you'd have one standard column for Location, and that adding another column for Where is it? is obviously silly. Columns define the keys of our key/value pairs. ↩
If you use these apps, an important consideration becomes 'lock-in'. How easy is it to export your data, if you choose? Bear makes this easy. Others might make it harder, or it might not be practical. Craft and Notion, for example, are database apps. You can't just export a database to a bunch of text files and expect it to make sense. ↩
Actually, I add another folder below 00.00 that contains both of the folders 00-09 and 10-19. This is because Obsidian doesn't allow you to 'rename' a vault: the vault name is just the name of the folder of files that you have opened. In the situation shown, the vault name would be '00.00 JDex for the system'.
I manage multiple vaults and want a more descriptive name. My business system's folder is named 'D25 JDex': D25 is the system identifier. This then appears as the vault name. In the screenshot above (figure 22.00.0182H) you can see the vault name is 'JDex - Life Admin System'. This is how it's downloaded from JDHQ. I left this detail out of the diagram as it's not important to the fundamental concept. ↩
Other than to store its own JDex entry, as shown. ↩
I almost lost a bunch of data this week. Here's how my backups saved me (just), and what I've changed.
Warning: thinking about backups can make you sleepy. But if you care about your data, you need to pay attention. There's a lot here: ask for help on the forum and Discord. This is important.
Note: now that I've finished this post, yeah, it's long, and complex. I might turn this into a short YouTube series. Let me know if that'd be helpful.
What happened
Actually this started months ago. I accidentally deleted my entire Documents folder! I was being too quick on the keyboard and I literally just Cmd+A then Cmd+Deleted everything. Turns out that's surprisingly easy to do.
I undid it by pressing Cmd+Z a second later, but the damage was done. I store my Documents folder in iCloud Drive, and I knew as soon as it happened that I'd likely triggered a cascade of network activity. My command to delete would have still been half-way to the cloud when I issued the command to undo (i.e. move those files out of my Trash back to Documents), and it would have taken a more robust service than iCloud to handle that without issue.
What I should have done at that point was stop everything and spend a few hours making sure everything was okay. I didn't do that.
Lesson 1. If something like this happens you MUST drop everything and fix it, completely and properly, before doing anything else.
Discovering the issue
This week I finally found time to upgrade myself to Life Admin v2.1 I thought I'd take the opportunity to really clean up some old stuff, and in doing so I realised that a bunch of really old archive stuff was missing.2 The folders were there, but there was nothing in them.
Lesson 2. If it doesn't have a proper Johnny.Decimal ID, it might as well not exist. In my case, files buried in an 'archive' folder were, in fact, important to me. I was just too lazy to organise them properly. I've since given them their own IDs in a new area, 29 The Omnium.
Even if these files had their own ID, I might not have picked this up. Everything looked okay, in the moment.
These missing files were buried pretty deep -- this thing that I'm archiving is itself an old Johnny.Decimal structure.3 How would I have known to go looking for this file, way down at 41.14?
Lesson 3. I'm going to keep a short list of 'canary files'. They're files that I care about, but rarely look at. I will manually check them in case of a 'restore event'.
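The canary-file idea can even be automated. Here's a sketch in Python (my own illustration, not anything Arq provides): record a checksum of each canary file once, then re-check them after any restore event. The function names are mine.

```python
import hashlib
import json
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()


def record_canaries(canary_paths, manifest_path: Path) -> None:
    """Save a manifest of {path: digest} for the files you care about but rarely open."""
    manifest = {str(p): sha256_of(Path(p)) for p in canary_paths}
    manifest_path.write_text(json.dumps(manifest, indent=2))


def check_canaries(manifest_path: Path) -> list:
    """After a restore, return a list of problems: missing or changed canary files."""
    problems = []
    manifest = json.loads(manifest_path.read_text())
    for path, digest in manifest.items():
        p = Path(path)
        if not p.exists():
            problems.append(f"MISSING: {path}")
        elif sha256_of(p) != digest:
            problems.append(f"CHANGED: {path}")
    return problems
```

An empty list back from `check_canaries` means your rarely-visited files survived the restore intact.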
So now I need to restore from backup
Obviously I have backups. I wrote a whole series about it. But backups are complex, mine especially so. This is of my own making, but also hard to avoid given my situation. (I won't recap here: read that series.)
Here are a few of the problems I encountered.
As soon as you actually need to restore from a backup because you lost some data, shit gets real, if you'll excuse my language.
Stress levels rise. Everything becomes difficult.
This is not a time to be confused.
Lesson 4. Good records are essential. This is what your JDex is for. See below for a template that you can download.
Not knowing which backup, as in from which date, to restore.
Because I can't remember exactly when this data-loss event occurred. I think it was around August? That's 4 months ago.
I use Arq and it allows you to search, really quickly, for a specific file across all of your backups. This made things much easier.
Its restore is also really fast, and this backup was local, i.e. a hard drive attached to the server, not cloud. So I could quickly do test restores to identify which was the right one.
Lesson 5. Know your backup software. Use good software: if yours isn't good, find another. I recommend Arq for Mac.
Lesson 6. Local backups are better than cloud backups when it comes to restoring data because they're much faster. (But you still need an offsite copy, in case your house burns down.)
Not knowing which backup, as in literally which one, to restore.
Because the way I'd configured them -- which I think is the way most people configure them -- didn't make it really obvious where things were.
So when I came to use them, there was confusion. Not what you want in a time of crisis.
More on this below.
Lesson 7. Change the way you structure backups. Previously, I backed up a hard drive or a computer. But what's actually important to me is the unit of data which is a Johnny.Decimal system.
My backups are encrypted.
Which they should be. And I use 1Password, and my vault is reasonably neat.
But when I tried the password I thought it should be, it failed. Shit. I tried another. Failed! Oh, shit. This is bad.
I must have mis-copy/pasted the first password. When I tried again, that worked.
But to have any possibility of doubt at all here is a total failure.
Ref: Lesson 4. Good records are essential.
'Set and forget'
This whole thing got me wondering if we're doing it all wrong. Backup services advertise themselves as 'set and forget': just install, then never think about it again!
That's what got me into this problem.
If you know about backups you're shouting at the computer: but Johnny, you should test your backups regularly and then you wouldn't be in this situation!
That's true. But you know what I think most people do when forced, begrudgingly, to test their backups? They think of a file that they deleted from their Desktop yesterday and they restore that. This is not a realistic test. It doesn't stress you, or your system.
Lesson 8. Daily 'set and forget' backups are great and you should not stop doing them. But you should also have a subset of backups that are a lot more deliberate, and are manually activated. You must see these backups as an opportunity to be neat and tidy. You must cherish them. They are not a chore. They are your happy place.
Conscious backups aka 'restore points'
Let's address that last lesson, lesson 8, first. I think there's room in life for two types of backup. The first is the one I've already got: the 'set and forget'. 90% of the time this is going to do the job. Don't stop doing those.
But the other type is what I'm calling a 'restore point'. In the IT world, a restore point is a backup at a specific point in time. You're saying that you can restore to this point in time.
They are much more deliberate, which means that they take more work. But they give you the sort of 'comfortable awareness' of your system that only comes from giving it your clear attention.4
Today is a restore point
I finished my migration to LAS v2 today, and with it a bunch of tidying up and moving around of data.
Today -- now -- my system is in a really nice state. I've just looked at everything and I'm deeply familiar with it. I'm sure it's good. Today is a restore point.
So I'm configuring a new backup. Here's what makes it different from my daily backups.
'Restore point' rules
They are manually triggered.
When I decide the time is right, I'll run this backup. I'll be adding something to my task system to prompt me to do this, perhaps quarterly.
They have no retention rules.
Normal backups are smart about the data they retain. Your computer might perform a backup every hour, but you don't need to keep every one of those backups. That would be a waste of space.
A typical 'retention cycle' might be to keep:
Every hour for the last 72 hours.
Every day for the last 30 days.
Every week for the last 52 weeks.
Every month forever.
My 'restore points' have no retention cycle. Every one is saved forever.
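As an aside, that example retention cycle is easy to express in code. This is a sketch of my own, not how Arq implements it: given a pile of backup timestamps, decide which to keep (the newest backup in each day, week, or month acts as that period's representative).

```python
from datetime import datetime, timedelta


def backups_to_keep(timestamps, now):
    """Apply the example retention cycle: keep every backup from the last
    72 hours, one per day for 30 days, one per week for 52 weeks, and
    one per month forever. Returns the set of timestamps to retain."""
    keep = set()
    seen_days, seen_weeks, seen_months = set(), set(), set()
    for ts in sorted(timestamps, reverse=True):  # newest first
        age = now - ts
        if age <= timedelta(hours=72):
            keep.add(ts)
        day = ts.date()
        week = ts.isocalendar()[:2]  # (year, ISO week number)
        month = (ts.year, ts.month)
        if age <= timedelta(days=30) and day not in seen_days:
            keep.add(ts)
            seen_days.add(day)
        if age <= timedelta(weeks=52) and week not in seen_weeks:
            keep.add(ts)
            seen_weeks.add(week)
        if month not in seen_months:  # monthly representatives are kept forever
            keep.add(ts)
            seen_months.add(month)
    return keep
```

Anything not in the returned set is safe to prune; everything else is a keeper.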
They have no exclusion rules.
My daily backup excludes a bunch of files and folders. Anything that looks temporary, or like a cached version, or like something I've downloaded, or like something that I know I could get from the cloud, is excluded.
For my daily backups, these rules keep them lean and fast. This is particularly important for me as I live on the road. But that isn't relevant for 'restore point' backups. It is more important that they are complete, so they have no exclusions.
The structure of a 'restore point'
Per lesson 7, these backups don't just back up an entire drive, say, or my user folder. (That's what your daily backups probably do, which is appropriate in that context.)
Instead, they back up the specific folders and only those folders that make up the Johnny.Decimal system that they target. I have 2 restore point backups: one for my P76 Personal system, and another for D25 Johnny.Decimal, the business system.
The personal system has files split across 2 locations:
My ~/Documents folder. This is stored in iCloud.
An external hard drive. This contains ~500GB of media files that don't need to use cloud storage space.
So it only backs up these folders. Not my whole laptop. No other cruft. Nothing is excluded: if something is in one of those folders, it's because I put it there, and I want it backed up.
The business system is simpler as everything is in a single folder. So that folder is the only one in the restore point backup.5
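Arq handles all of this for me, but the idea is simple enough to sketch. A minimal, hypothetical version in Python: copy the specific folders that make up one system, in full, into a dated folder that is never overwritten. The function name and folder naming scheme are mine, for illustration only.

```python
import shutil
from datetime import datetime
from pathlib import Path


def make_restore_point(source_folders, target_root, when=None):
    """Copy each source folder, complete and with no exclusions, into a
    dated folder under target_root. Returns the new restore point's path."""
    when = when or datetime.now()
    dest = Path(target_root) / when.strftime("restore-point-%Y-%m-%d")
    # Refuse to overwrite: restore points are kept forever, never replaced.
    dest.mkdir(parents=True, exist_ok=False)
    for folder in source_folders:
        folder = Path(folder)
        shutil.copytree(folder, dest / folder.name)  # everything, dotfiles included
    return dest
```

For my personal system this would be called with two sources (~/Documents and the external drive's media folder); for the business system, just the one.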
Local backups using Arq
These backups all take place on an always-on Mac mini which is connected via Ethernet to a Synology. We learn from lessons 5 and 6: use good software that you know well to make local backups because they're fast.
Records-keeping
Lesson 4 taught us that good records-keeping is vital. You must have total confidence in these backups, and have good records that you can depend on in a crisis.
Of course, this is what your JDex is for. LAS and SBS both have a place for backups; in LAS that's 14.14 My data storage & backups. In that entry I have left myself copious notes. I've recorded:
The name of the restore point backup (in Arq).
Its data sources (the folders it backs up).
Its target (where it backs up to).
Its frequency (manual).
Any exclusion and retention rules (none).
Where its encryption keys are held (1Password: be explicit and name the entry).
Then, every time I manually trigger a backup, I'm noting:
The date & time of the backup.
How large each of the folders was at the time (right-click in Finder and Get Info).
That I checked my canary files.
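If you'd rather script the folder-size step than right-click in Finder, it's a couple of lines. A sketch (Finder's Get Info number may differ slightly, as it includes some filesystem metadata that this doesn't):

```python
from pathlib import Path


def folder_size_bytes(folder) -> int:
    """Total size of all files beneath a folder, recursively."""
    return sum(p.stat().st_size for p in Path(folder).rglob("*") if p.is_file())


def human(n) -> str:
    """Render a byte count in familiar units, e.g. 1500 -> '1.5 KB'."""
    for unit in ("B", "KB", "MB", "GB", "TB"):
        if n < 1024 or unit == "TB":
            return f"{n} B" if unit == "B" else f"{n:.1f} {unit}"
        n /= 1024
```

Record the number in your JDex each time; a sudden drop between restore points is exactly the kind of thing you want to notice.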
You want this to be a story that you can return to. A place of total comfort.
Use my template
I've knocked up a template that I'm using myself. It's far from perfect, but it does the job. Get it here.
Using the 3-2-1 method of data storage -- see the notes on pages 3 & 4 in the PDF, or search online -- it has a space for you to record:
Where the primary copy of your data is.
This is the folder that you back up.
Where the secondary copy of your data is.
Where the backup of your data is.
Which of the secondary or backup is your offsite copy.
We added a handful of IDs to v1 and moved things around in the finance area to align it with the Small Business System. ↩
Specifically, the file that started Johnny.Decimal: a carefully-designed PDF of ticket prices that was stuck to our office wall in 2010. I wanted a way to reference this file that wasn't an ugly file path, so I came up with the idea of the numbers.
That was 15 years ago, and I have no use for this file today. I think about it perhaps once a year. But I'd be devastated if I lost it.
It's a little slow around here at this time of year. I'm spending time with my family in the UK; my first Christmas here since 2002! The year after, I went to Australia as a backpacker and just never came home. I like to say that I moved to Australia by accident.
So here's something for the holidays. I have access to a kitchen again, and the first thing I did was go and buy fresh tea towels. If you care about cooking, this is how you need to tea towel. This is the two-tea-towel method.
Theory
When you're in the kitchen, you have two requirements in the cloth department.
First, you need one permanently slung over your shoulder that you can use to wipe your hands, the bench, to handle a pan, or mop a spill. This is your 'bench towel'.
Second, you need to dry things that you'll eat with, or off. You'll need to polish a glass or a spoon. This is your 'clean towel'.
Implementation
You need two (2) types of towel to meet these two needs.
Let's start with your clean towel. This needs to be a decent towel that can actually dry a thing. It needs a bit of fuzz to it, y'know? Your clean towel can't be one of those that feels like an old pair of underpants even when it's brand new.
Spend a tiny bit of money on your clean towels. For me. If you're on a budget, your bench towel doesn't need to be as fancy.
As a rule, simple cotton does the job. Avoid linen. And of course wash them before use to fluff them up a touch.
They must be visually distinct
It is vital that these two towels are different so that you can see, at a glance, which is which. If possible, don't choose a dark coloured 'clean towel': you want to be able to quickly intuit when it's no longer clean.
So find yourself two types of towels.
Clean: light (in colour), heavy (in weight and drying power).
Bench: dark (in colour), and can be cheaper.
We used these stalwarts from IKEA for many years. I used 'stripes' as clean and 'checks' as bench; while I wish they were more visually distinct, it worked. Importantly, the fabric suited both jobs perfectly and the price is right. Because...
Buy a dozen of each
That's twelve (12), six plus six. Two more than ten. Of each, not total. Eight more than you thought you needed before you read this article.
No, that is not too many. You must be able to use these towels freely. This is core to the method: you will never, ever use a dirty towel. You will use them with reckless abandon. You will be utterly free in your towel usage. This will change your life.
When we lived in a house and I prepared three meals a day I used at least one set of towels a day. On a normal day, one set is enough. It depends what you cook. But the instant you look at a towel and wonder as to its cleanliness: get a fresh one. Chuck yours in the wash.
You need hooks
This tip courtesy of my mam, who currently doesn't have hooks. Your tea towels must be on a hook, attached to an open wall. This removes all friction, literally.
Don't go hanging them over the oven handle or some other hack. Get a 3M hook and stick it on the wall. No excuses.
The weekly towel wash
We were lucky in having a house with a washing line (and generally sunny weather, in Canberra). As I sit and type this in a basement flat in the UK with my thermal top drying on a radiator I realise this situation is not universal. But do what you can.
At the end of the week, you've got 14 towels to wash. Chuck in some bathroom towels and that's a full load. Now you can hang them all out together and fold them together and that'll speed things right up.
Merry Christmas
Have a lovely Christmas. It's not too late to pop down the shops to get yourself a last minute gift. I'm sure IKEA is delightful today.