
Biomedical and Electrical Engineer with interests in information theory, evolution, genetics, abstract mathematics, microbiology, big history, Indieweb, and the entertainment industry including: finance, distribution, representation

John, I had many of the same thoughts you did when I was looking at Anchor again as a possible microcast option last week. Its relatively ephemeral nature made me think "it's Snapchat for audio." I suspect it may be easier for them not to have to host/stream old audio, though they do keep users' streams if you mark them to be saved. I also have a feeling it's much harder for users to build a native audience on such a platform unless they're bringing an audience with them. I was pained by how much hunting and searching it took to find an option to download and save my recordings to archive personally or to post on my own site.


@Mercy They seem like completely different products to me.
Known is a CMS which allows you to own your data, then publish and syndicate it to multiple platforms (including uploading to Anchor). It doesn't have any audio creation or editing functionality. I have seen people using this and WordPress lately for small personal microcast "channels" which can be subscribed to or syndicated out to other social platforms.

Anchor appears to have some production and publishing tools as well as a distribution platform of sorts. It looks like the material you publish to your own station can be listened to for 24 hours, but then disappears unless you archive it to your account privately. Fortunately you can export your audio with a little bit of gymnastics, but it's not intuitive. This seems more like an ephemeral audio version of Snapchat to me.

I'm curious what you're looking for in a minipodcast? I'm considering something shortly myself and have been looking at Anchor as well as Opinion2, Spreaker Studio, audioBoom, and even others as simple as using my LiveScribe Echo pen to record and then distribute via my Known or WordPress sites.


Internet Archive moves toward ignoring robots.txt for a more commonsensical archival procedure.


It's not often that linkrot goes in the other direction. Hurrah for @upcomingorg!


@kartik_prabhu Webmentions to things other than posts: pages, archives, etc. In this particular case, to my homepage.


Or by strange formatting did you mean some of the struck-out text on links?

If that's the case, I'll note that I've got a parser on my site that checks for broken links (404s and others) that uses CSS to strike out links which have gone dead. And there were a number on that particular article. I have a master list of them compiled on my back end and usually once a month or so I go through to attempt to redirect them to new locations or to copies if available. In some sense it's a personal reminder to myself of how fragile the web can be.

Perhaps I could do a better/alternate visualization for broken links than striking out the text? Red links? Hover text? Other? Suggestions?


Jeremy, odd that you've noticed, but even odder (or not, given that the average lifespan of a page on the web is about 100 days) that the original has now 404'd.

I've been slowly working on an "I've read this" type of workflow in which I scrape and archive the contents of the things I've read for future searching (as well as for maintaining my own highlights, notes, and other marginalia), though the contents should only be available to me on my back end.

The system is supposed to be set up such that when one visits the page on my site it automatically redirects to the original, and that if the original is gone it redirects to an archived version on the Wayback Machine. The strange formatting is because it's being displayed with my theme rather than the original site's theme. Because it should generally only be available to me, I've not put a lot of effort into modifying the display/UI.
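In rough, language-agnostic terms (a Python sketch, not the code my site actually runs; the helper names here are hypothetical), that fallback logic looks something like:

```python
from urllib import request
from urllib.error import URLError


def wayback_url(original, timestamp="20170101"):
    """Build a Wayback Machine URL for the snapshot closest to a
    YYYYMMDD[hhmmss] timestamp (hypothetical helper)."""
    return f"https://web.archive.org/web/{timestamp}/{original}"


def resolve(url, timeout=10):
    """Redirect readers to the original URL if it still answers;
    otherwise fall back to an archived copy on the Wayback Machine."""
    try:
        request.urlopen(request.Request(url, method="HEAD"), timeout=timeout)
        return url  # original is still alive
    except (URLError, ValueError):
        return wayback_url(url)
```

The real workflow also stores the archived copy on my back end, but the redirect decision boils down to a liveness check followed by a Wayback fallback.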

Thanks for "noticing" the bug; I'll see what I can do to fix it shortly, though I'm glad you ultimately got to read the thing you were looking for...


For those interested, @mapkyca has a Wayback Machine support plugin for @WithKnown


Kevin Marks's Day 2: Saving webmentioned URLs to the Internet Archive


Feature request: Archive internal page/post links to Internet Archive on publish/update · Issue #23 · ManageWP/broken-link-checker

I might suggest the following functionality could fit in well with the plugin's general purpose, particularly since the Internet Archive recommends the plugin. It could also help to "close the loop" in the plugin's overall functionality for helping to maintain data integrity and working links for WordPress sites on the web.

**Suggested Functionality**
When one initially publishes (or possibly updates) a post/page, it would be awesome if all of the URLs referenced on the page, as well as that of the page itself, were pinged for archiving to the Internet Archive's Wayback Machine at the day and time of their being referenced in the post. (Or perhaps within a day or two of the post so as not to overwhelm the Internet Archive's servers with multiple subsequent updates for the typos/tweaks which invariably happen post-publication.)
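The archiving step itself is simple in any language, since the Wayback Machine accepts capture requests at its public save endpoint. A minimal sketch in Python rather than the plugin's PHP (the function names are my own, not anything the plugin defines):

```python
from urllib import request
from urllib.error import URLError


def save_url(url):
    """The Wayback Machine's public capture endpoint for a given URL."""
    return "https://web.archive.org/save/" + url


def archive_urls(urls, timeout=30):
    """Ask the Wayback Machine to capture each URL at publish time;
    return the subset of URLs whose capture request went through."""
    archived = []
    for url in urls:
        try:
            request.urlopen(save_url(url), timeout=timeout)
            archived.append(url)
        except URLError:
            pass  # log and retry later rather than blocking publication
    return archived
```

In a WordPress context this would presumably hang off a publish/update hook and run asynchronously, so a slow or unreachable archive never delays the post itself.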

With this functionality, then in the future, if (when) resources change, move, etc., one could use the primary functionality in Broken Link Checker not only to restore a link to a close or reasonable copy of the original, but to restore it to a snapshot from the _same_ day it was originally referenced.

As I'm sure you're all too aware, this can be very handy as the average web page has a lifespan of 100 days or less. I can see this being very useful to not only the general public, but particularly for bloggers, linkbloggers, journalists, and academics.

To my knowledge, there are no plugins within the WordPress repository that manage this type of functionality as a standalone plugin, though there is the heavily underused Post Archival in the Internet Archive plugin, which essentially adds one's individual post/page permalink URL to the Internet Archive as it's published, though it doesn't include the archival of any links (references) within that same post.

I'm sure the Wayback Machine may provide some additional documentation for implementation, though I suspect the code in the above-referenced plugin is a very good exemplar. I also recently came across this snippet after a recent conference on saving/archiving news sites, which may be beneficial as well. It certainly demonstrates at least an interest in, and demand for, such functionality.

I haven't dug into WordPress core, but I'm guessing the functionality that parses URLs within pages/posts for sending Trackbacks/Pingbacks would have a filter or hook that could provide all of the URLs in a post/page for such processing and archiving, while the_permalink() or get_permalink() provides the URL of the page itself.
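Whatever hook ends up supplying the rendered content, the URL extraction itself is straightforward. Here's a rough Python sketch of the kind of parsing I have in mind (WordPress would of course do this in PHP, and the class/function names here are illustrative only):

```python
from html.parser import HTMLParser


class LinkExtractor(HTMLParser):
    """Collect the href of every absolute <a> link in a post's HTML."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            # Only external, absolute URLs are worth sending to the archive
            if href and href.startswith(("http://", "https://")):
                self.links.append(href)


def extract_links(html):
    """Return all archivable link targets found in a post's rendered HTML."""
    parser = LinkExtractor()
    parser.feed(html)
    return parser.links
```

Each extracted URL, plus the post's own permalink, would then be queued for submission to the Wayback Machine.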

Given the popularity of this spectacular plugin, it could also potentially become one of the largest forces for archiving vast swaths of the internet to the Internet Archive, short of WordPress adding such functionality directly into core.