Why use a microblog when mastodon/bluesky/tumblr/etc. exist?

Why?

I saw a mastodon mutual of mine open an account with micro.blog recently in order to have a non-SNS place to ramble[1]. They go into some pretty insightful and personal details (do check them out!), and that made me wonder about my own reasons as well.

I keep a separate microblog as well, though sometimes I consider it more of a statuslog. I don’t post on there as often as I do on my mastodon account, but I have noticed that I’m a little hesitant to post the random thoughts or feelings I have on mastodon. For example, I find it really fun to post about my game or book pickups on mastodon because other people might be amused by that (and I want them to see those posts and maybe spark some interaction).

But I’m more hesitant to complain about how my day was, or when I’m feeling under the weather. I think most people wouldn’t find it interesting to hear about how often I have migraines, for example. And sometimes I just wanna yap about whatever show or book I’m into at the moment, even when I don’t have particularly deep thoughts about it.

“But Navi,” I hear you say, “isn’t that what this blog is for?”

Dear imaginary reader, yes and no. It’s true that if I have more than a couple of sentences to say about something, I would rather post it on my blog(s), provided it falls into one of my established categories (ranobe, games, book reviews, etc.). But sometimes your girl just wants to talk about the awesome ice cream she ate last night or the post-series depression from a really good litrpg series.

And honestly, sometimes I cross-post these spur-of-the-moment posts to my mastodon anyways, after thinking “why shouldn’t I post it there too??”. It’s helpful to have a place where you can be a little more… “relaxed” first and maybe vent a little. Anyways, those are my rough reasons for bothering to have a microblog/statuslog even though I’m already pretty active on mastodon.

Because I plan on putting this blog post in the “webdev” category, let’s talk about some more practical and fun stuff now lol.

How it’s made

I initially started with the Pastille template by NomNomNami. Mary linked to it, and it got me thinking about how I might want to “archive” some mastodon posts. I was excited to use it since it was totally just an HTML page with some fancy CSS, with no other complicated static site generation tools involved.

However, while I was tinkering with the code, I kept finding modifications I wanted to make. I collected these ideas over time and slowly added them to my page by asking Gemini for code solutions and tweaking things when they didn’t work. Below are the customizations I made to the Pastille template.

1. Adding post IDs

I simply used anchor links for these!
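In practice that just means each post’s `<article>` carries a zero-padded `id`, and the `#0001` in the post header links back to it, so every post gets a shareable URL. A sketch of the markup (dates and content here are made up; the structure mirrors what my Micropub function further down generates):

```html
<!-- Sketch of one post with an anchor-link ID -->
<article class="post" id="0001">
  <div class="post-header">
    @navi <time>📅26 Feb 2026 🕐9:41 PM</time>
    <!-- Linking to the article's own id makes the post permalinkable -->
    <a href="#0001"><id>#0001</id></a>
  </div>
  <p>Hello, world!</p>
</article>
```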

2. Customizing the color scheme

I picked some colors from the avatar I was planning to use, and edited the CSS.
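If you want to do the same, the change is tiny; here’s a hypothetical sketch (the actual selectors and color values in Pastille differ):

```css
/* Hypothetical palette pulled from an avatar; defining the colors as
   custom properties keeps the swap to a one-place edit. */
:root {
  --bg: #fdf6f0;
  --accent: #b185a7;
  --text: #3b3b3b;
}

body  { background: var(--bg); color: var(--text); }
a     { color: var(--accent); }
.post { border-color: var(--accent); }
```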

3. Creating an RSS feed with a script

I used a Python script and a GitHub Action for this. I can manually trigger the script to regenerate the RSS feed whenever I’m done posting.

RSS Generation Script

```python
import io
import re
import xml.dom.minidom
import zoneinfo
from datetime import datetime

import PyRSS2Gen
from bs4 import BeautifulSoup

# CONFIGURATION
SITE_URL = "https://now.pomnavi.net"
SITE_TITLE = "What is Navi up to now?"
SITE_DESC = "Doing geeky things probably."
PACIFIC_TZ = zoneinfo.ZoneInfo("US/Pacific")

def get_dt_object(date_text):
    """Helper to convert the emoji-prefixed date string to a real datetime object for sorting."""
    try:
        clean_date = re.sub(r'[📅🕐]', '', date_text).strip()
        clean_date = " ".join(clean_date.split())
        dt = datetime.strptime(clean_date, "%d %b %Y %I:%M %p")
        return dt.replace(tzinfo=PACIFIC_TZ)
    except Exception as e:
        print(f"Date error: {e}")
        return datetime.now(PACIFIC_TZ)

# 1. Load HTML
with open("index.html", "r", encoding="utf-8") as f:
    soup = BeautifulSoup(f, "html.parser")

posts = soup.find_all("article", class_="post")
temp_list = []
for post in posts:
    post_id = post.get("id", "0000")
    time_tag = post.find("time")
    # Get datetime object for sorting
    dt_obj = get_dt_object(time_tag.get_text()) if time_tag else datetime.now(PACIFIC_TZ)
    # Format the string for the RSS XML (RFC 822)
    formatted_date = dt_obj.strftime("%a, %d %b %Y %H:%M:%S %z")
    description = " ".join([p.text for p in post.find_all("p")])
    rss_item = PyRSS2Gen.RSSItem(
        title=f"Post #{post_id}",
        link=f"{SITE_URL}/#{post_id}",
        description=description,
        guid=PyRSS2Gen.Guid(post_id, isPermaLink=False),
        pubDate=formatted_date
    )
    # Add a tuple to our list: (the datetime object, the rss item)
    temp_list.append((dt_obj, rss_item))

# 2. Sort by the actual datetime object (the first item in the tuple);
# reverse=True puts the newest posts at the top
temp_list.sort(key=lambda x: x[0], reverse=True)
# Extract only the RSSItems for the generator
rss_items = [item[1] for item in temp_list]

# 3. Generate the RSS object
current_build_time = datetime.now(PACIFIC_TZ).strftime("%a, %d %b %Y %H:%M:%S %z")
rss = PyRSS2Gen.RSS2(
    title=SITE_TITLE,
    link=SITE_URL,
    description=SITE_DESC,
    lastBuildDate=current_build_time,
    items=rss_items
)

# 4. Format/pretty-print the XML
tmp_file = io.StringIO()
rss.write_xml(tmp_file, encoding="utf-8")
dom = xml.dom.minidom.parseString(tmp_file.getvalue())
pretty_xml = dom.toprettyxml(indent=" ")

# 5. Save the pretty file
with open("feed.xml", "w", encoding="utf-8") as f:
    f.write(pretty_xml)
print("Chronological pretty-printed RSS feed updated successfully!")
```

RSS Generation Github Action

```yaml
name: Generate RSS Feed

on:
  workflow_dispatch: # Allows you to click a button to run it manually

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v3
        with:
          fetch-depth: 0 # Ensures it gets the absolute latest commit
      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.x'
      - name: Install dependencies
        run: pip install beautifulsoup4 PyRSS2Gen tzdata
      - name: Run generator
        run: python generate_rss.py
      - name: Commit and push if changed
        run: |
          git config --global user.name "github-actions[bot]"
          git config --global user.email "github-actions[bot]@users.noreply.github.com"
          git add feed.xml
          # Commit and push only if feed.xml actually changed
          git diff --quiet && git diff --staged --quiet || (git commit -m "Manual RSS Feed Update" && git push)
```

4. Hiding posts when there are more than 20 of them (with a load-more button)

5. Making the tags at the bottom of each post actually functional when you click on them

Items 4 and 5 were accomplished with the following JavaScript, which is located at the bottom of my index page:

```html
<script>
  // CONFIGURATION
  const postsToShowInitially = 20;
  const amountToLoadEachTime = 20;

  window.addEventListener('DOMContentLoaded', (event) => {
    const allPosts = document.querySelectorAll('.post');
    const loadMoreBtn = document.getElementById('load-more-btn');
    const endMsg = document.getElementById('end-message');

    // 1. Initial visibility: no button needed if everything already fits
    if (allPosts.length <= postsToShowInitially) {
      if (loadMoreBtn) loadMoreBtn.style.display = 'none';
    }

    // 2. Tag click logic (works with the template's radio-button filters)
    document.addEventListener('click', function(e) {
      if (e.target.classList.contains('inline-tag')) {
        e.preventDefault();
        const selectedTagId = e.target.getAttribute('data-tag');
        // Find the radio button that matches the tag ID
        const targetRadio = document.getElementById(selectedTagId);
        if (targetRadio) {
          targetRadio.checked = true; // "Click" the radio button
          // Trigger the change event so the CSS/JS filtering kicks in
          targetRadio.dispatchEvent(new Event('change'));
          window.scrollTo({ top: 0, behavior: 'smooth' });
        }
      }
    });

    // 3. Handle direct hash links (e.g., site.com/#0001)
    if (window.location.hash) {
      try {
        const target = document.querySelector(window.location.hash);
        if (target) {
          target.classList.remove('is-hidden');
          target.style.display = 'block';
          target.scrollIntoView();
        }
      } catch (e) { console.error("Hash error:", e); }
    }

    // 4. Load-more button logic
    if (loadMoreBtn) {
      loadMoreBtn.addEventListener('click', function() {
        loadMoreBtn.classList.add('loading');
        setTimeout(() => {
          const hiddenPosts = document.querySelectorAll('.post.is-hidden');
          for (let i = 0; i < amountToLoadEachTime; i++) {
            if (hiddenPosts[i]) {
              hiddenPosts[i].classList.remove('is-hidden');
              hiddenPosts[i].style.display = 'block';
            }
          }
          loadMoreBtn.classList.remove('loading');
          // Check again: if no more hidden posts, hide the button and show the end message
          if (document.querySelectorAll('.post.is-hidden').length === 0) {
            loadMoreBtn.style.display = 'none';
            if (endMsg) {
              endMsg.style.display = 'block'; // the end message starts out display:none
              endMsg.classList.add('visible');
            }
          }
        }, 600);
      });
    }

    // 5. Filter change logic: the button and end message only apply to "all"
    const filters = document.querySelectorAll('input[name="tag"]');
    filters.forEach(filter => {
      filter.addEventListener('change', function() {
        if (this.id !== 'all') {
          if (loadMoreBtn) loadMoreBtn.style.display = 'none';
          if (endMsg) endMsg.style.display = 'none';
        } else {
          if (document.querySelectorAll('.post.is-hidden').length > 0) {
            if (loadMoreBtn) loadMoreBtn.style.display = 'block';
            if (endMsg) endMsg.style.display = 'none';
          } else {
            if (endMsg) endMsg.style.display = 'block';
          }
        }
      });
    });
  }); // End of DOMContentLoaded
</script>
```

6. Creating archive pages

7. Creating a function that auto-redirects you when a post has been moved to an archive

For items 6 and 7, I use two files: a config file that records which posts live in which archive page, and another script, added to the index page, that builds out the navigation menu for the archive pages.

config.js

```javascript
const SITE_CONFIG = {
  // Update this once a year when you move posts
  archives: [
    { year: "2025", lastId: 5 }
  ],
  currentYear: "2026"
};
```

Archive and navigation script, placed in the head tag of the index page:

```html
<!-- navigation and locator -->
<script src="config.js"></script>
<script>
  window.addEventListener('DOMContentLoaded', () => {
    const hash = window.location.hash.replace('#', '');
    const path = window.location.pathname;

    // 1. REDIRECT LOGIC: a numeric hash with no matching element on this
    // page means the post was moved to an archive
    if (hash && !document.getElementById(hash) && !isNaN(hash)) {
      const idNum = parseInt(hash, 10);
      const target = SITE_CONFIG.archives.find(range => idNum <= range.lastId);
      if (target) {
        window.location.href = `archive-${target.year}.html#${hash}`;
      }
    }

    // 2. NAVIGATION GENERATOR
    const nav = document.getElementById('yearly-nav');
    if (nav) {
      // Start with the "Now" (index) link
      const isNow = path.includes('index.html') || path.endsWith('/');
      let html = `<a href="index.html" class="${isNow ? 'active' : ''}">Now</a>`;
      // Add each year from the config
      SITE_CONFIG.archives.forEach(archive => {
        const isCurrent = path.includes(`archive-${archive.year}`);
        html += `<a href="archive-${archive.year}.html" class="${isCurrent ? 'active' : ''}">${archive.year}</a>`;
      });
      nav.innerHTML = html;
    }
  });
</script>
```

8. Adding Micropub functionality

I kind of didn’t even understand what Micropub was at first, but Mary’s enthusiasm about it convinced me to research it and see if I could implement it on my site. Basically, it acts like a CMS for your static site, letting you easily create and format posts from a posting app. You can use any third-party Micropub client, but the setup is kind of a weird concept to get your head around.

I spent a couple of hours getting this working; I literally finished it today. You basically have to add some HTML links to your head tag, and then create either a server or a “serverless” function that can receive your posts from the app and format them for your page.
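For the head links, the Micropub spec has clients discover your endpoints through `rel` values. Something along these lines (the micropub URL matches the Netlify redirect further down; the authorization and token endpoints here are just example placeholders for whichever IndieAuth provider you set up):

```html
<!-- Micropub endpoint discovery links (sketch) -->
<link rel="micropub" href="https://now.pomnavi.net/micropub">
<link rel="authorization_endpoint" href="https://indieauth.com/auth">
<link rel="token_endpoint" href="https://tokens.indieauth.com/token">
```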

Since the Pastille template is just an HTML page, adding Micropub functionality makes posting way easier! I don’t have to manually edit the post IDs, dates, times, and HTML tags. I can just focus on the stupid message I want to send lol.

Since I use Netlify for hosting, I use a Netlify serverless function to accept my Quill posts and turn them into HTML on my microblog. Obviously I’m not going to post the authorization tokens or anything, but you can see the broad strokes of the function below. It’s just a TOML file, a JavaScript function, and a marker in the index that tells the Micropub function where to place the new post.

netlify.toml

```toml
[build]
  command = "npm install"
  functions = "netlify/functions"
  publish = "."

[[redirects]]
  from = "/micropub"
  to = "/.netlify/functions/micropub"
  status = 200
```

micropub.js

```javascript
const axios = require('axios');
const Busboy = require('busboy');

// Parse a multipart/form-data body into fields and uploaded files
const parseMultipart = (event) => {
  return new Promise((resolve, reject) => {
    const fields = {};
    const files = [];
    const busboy = Busboy({ headers: event.headers });
    busboy.on('file', (fieldname, file, info) => {
      const { filename, mimeType } = info;
      let fileBuffer = Buffer.alloc(0);
      file.on('data', (data) => { fileBuffer = Buffer.concat([fileBuffer, data]); });
      file.on('end', () => { files.push({ filename, mimeType, content: fileBuffer }); });
    });
    busboy.on('field', (name, val) => {
      if (fields[name]) {
        if (!Array.isArray(fields[name])) fields[name] = [fields[name]];
        fields[name].push(val);
      } else { fields[name] = val; }
    });
    busboy.on('finish', () => resolve({ fields, files }));
    busboy.on('error', (err) => reject(err));
    const body = event.isBase64Encoded ? Buffer.from(event.body, 'base64') : event.body;
    busboy.end(body);
  });
};

exports.handler = async (event) => {
  if (event.httpMethod !== 'POST') return { statusCode: 405, body: 'Method Not Allowed' };
  const authHeader = event.headers.authorization;
  if (!authHeader) return { statusCode: 401, body: 'Missing Token' };

  try {
    let content = "";
    let rawCategories = [];
    let photoFile;
    const contentType = event.headers['content-type'] || '';

    // --- DEEP DATA EXTRACTION ---
    if (contentType.includes('application/json')) {
      const body = JSON.parse(event.body);
      // Check properties (standard Micropub JSON syntax)
      if (body.properties) {
        content = body.properties.content ? body.properties.content[0] : "";
        rawCategories = body.properties.category || body.properties.tag || [];
      } else {
        content = body.content || "";
        rawCategories = body.category || body.tag || [];
      }
    } else if (contentType.includes('multipart/form-data') || contentType.includes('application/x-www-form-urlencoded')) {
      let fields;
      if (contentType.includes('multipart')) {
        const parsed = await parseMultipart(event);
        fields = parsed.fields;
        photoFile = parsed.files[0];
      } else {
        const params = new URLSearchParams(event.body);
        fields = {};
        for (const [key, value] of params.entries()) {
          fields[key] = params.getAll(key).length > 1 ? params.getAll(key) : value;
        }
      }
      content = fields.content || "";
      // Look for category, categories, or tags
      rawCategories = fields.category || fields['category[]'] || fields.tags || fields.tag || [];
    }

    // --- NORMALIZATION ---
    // Ensure it's an array and handle comma-separated strings
    let categoryList = Array.isArray(rawCategories) ? rawCategories : [rawCategories];
    if (categoryList.length === 1 && typeof categoryList[0] === 'string' && categoryList[0].includes(',')) {
      categoryList = categoryList[0].split(',');
    }
    const categories = categoryList
      .flat()
      .filter(cat => cat && typeof cat === 'string')
      .map(cat => cat.replace('#', '').trim().toLowerCase());

    // --- TIME (Pacific) ---
    const dateObj = new Date();
    const dateOptions = { day: 'numeric', month: 'short', year: 'numeric', timeZone: 'America/Los_Angeles' };
    const timeOptions = { hour: 'numeric', minute: '2-digit', hour12: true, timeZone: 'America/Los_Angeles' };
    const dateStr = dateObj.toLocaleDateString('en-GB', dateOptions);
    const timeStr = dateObj.toLocaleTimeString('en-US', timeOptions);

    // 1. Image upload (shortened for brevity)
    let imageHtml = "";
    if (photoFile) {
      const fileName = `images/uploads/${Date.now()}-${photoFile.filename}`;
      await axios.put(`https://api.github.com/repos/${process.env.GH_USER}/${process.env.GH_REPO}/contents/${fileName}`, {
        message: `Upload image`,
        content: photoFile.content.toString('base64')
      }, { headers: { Authorization: `token ${process.env.GH_TOKEN}` } });
      imageHtml = `\n  <p><img src="${fileName}" alt="Upload" style="max-width:100%; height:auto; border-radius:8px;"></p>`;
    }

    // 2. Fetch index.html and derive the next post ID
    const indexUrl = `https://api.github.com/repos/${process.env.GH_USER}/${process.env.GH_REPO}/contents/index.html`;
    const getFile = await axios.get(indexUrl, { headers: { Authorization: `token ${process.env.GH_TOKEN}` } });
    const fileContent = Buffer.from(getFile.data.content, 'base64').toString('utf-8');
    const idMatch = fileContent.match(/id="(\d{4})"/);
    const nextNum = (idMatch ? parseInt(idMatch[1]) + 1 : 1).toString().padStart(4, '0');

    // 3. Build the post HTML
    const articleClasses = ["post", ...categories].join(" ");
    const tagLinks = categories.map(tag => `<a href="#" class="inline-tag" data-tag="${tag}">#${tag}</a>`).join("\n    ");
    const newPost = `
<article class="${articleClasses}" id="${nextNum}">
  <div class="post-header">@navi <time>📅${dateStr} 🕐${timeStr}</time> <a href="#${nextNum}"><id>#${nextNum}</id></a> </div>
  <p>${content}</p>${imageHtml}
  <section class="tags">
    ${tagLinks}
  </section>
</article>`;

    // 4. Inject the post at the marker and push the updated index
    const marker = '<div id="micropub-marker" style="display:none;">--- MICROPUB-TARGET ---</div>';
    if (!fileContent.includes(marker)) return { statusCode: 500, body: "Marker missing" };
    await axios.put(indexUrl, {
      message: `Post #${nextNum}`,
      content: Buffer.from(fileContent.replace(marker, `${marker}\n${newPost}`)).toString('base64'),
      sha: getFile.data.sha
    }, { headers: { Authorization: `token ${process.env.GH_TOKEN}` } });

    // 5. Trigger the GitHub Action that regenerates the RSS feed
    try {
      await axios.post(
        `https://api.github.com/repos/${process.env.GH_USER}/${process.env.GH_REPO}/actions/workflows/rss-generator.yml/dispatches`,
        { ref: 'main' }, // Double-check whether your branch is 'main' or 'master'
        {
          headers: {
            Authorization: `token ${process.env.GH_TOKEN}`,
            Accept: 'application/vnd.github.v3+json',
            'User-Agent': 'Netlify-Function'
          }
        }
      );
      console.log("RSS workflow triggered successfully");
    } catch (dispatchError) {
      console.error("Failed to trigger RSS workflow:", dispatchError.response ? dispatchError.response.data : dispatchError.message);
    }

    return {
      statusCode: 201,
      headers: { "Location": `https://now.pomnavi.net/#${nextNum}` },
      body: JSON.stringify({ message: 'Success' })
    };
  } catch (error) {
    return { statusCode: 500, body: error.message };
  }
};
```
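For a sense of what all that parsing is handling: a Micropub client posting a simple note just sends a form-encoded POST. A quick Python sketch of such a body (the content and tags are made up):

```python
from urllib.parse import urlencode

# Form-encoded body for a simple Micropub note: "h=entry" marks the
# post type, and repeated "category" keys become the post's tags.
body = urlencode([
    ("h", "entry"),
    ("content", "Hello from my microblog!"),
    ("category", "games"),
    ("category", "books"),
])
print(body)

# A client sends this via POST with headers like:
#   Content-Type: application/x-www-form-urlencoded
#   Authorization: Bearer <token>
```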

Conclusion

Was all this hard work worth it? I hope so! I’ve made it really easy to make my silly little posts now, and I really like how it looks and works. Here’s to more rambling in the future!

Footnotes

  1. Why did I make an account on micro.blog and why am I paying a dollar a month to have a place to ramble? by ArimaMary, February 26, 2026.
