Biomedical and Electrical Engineer with interests in information theory, evolution, genetics, abstract mathematics, microbiology, big history, Indieweb, and the entertainment industry including: finance, distribution, representation

boffosocko.com

@billbennettnz @davewiner I think I mentioned to you that @Chronotope was mulling something over along these lines:
https://twitter.com/Chronotope/status/830097158665801728

I'm curious if there's a middle ground. The way that @davewiner does his blog, with updating hashes throughout the day, would be interesting within news distribution: the URL changes, but at the same time it doesn't really. Example: http://scripting.com/2017/08/17.html#a094957 (Naturally the ability to update RSS feeds over time would be useful, as he describes in this particular post, but it would also depend heavily on how users subscribe to their news.) In his case, the updates are categorized by day/date rather than by topic or category, which is what an unfolding story in a digital news publication would more likely do.
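Those anchors appear to encode the time of day of each update (e.g. #a094957 looks like 09:49:57), so the day page's URL stays stable while every update gets its own addressable hash. A minimal sketch of that scheme; the URL pattern and anchor format are my inference from the example above, not a documented spec:

```python
from datetime import datetime

def update_permalink(base_url: str, posted: datetime) -> str:
    """Build a scripting.com-style permalink: one page per day,
    one time-coded hash anchor per update on that page."""
    page = posted.strftime("%Y/%m/%d.html")   # e.g. 2017/08/17.html
    anchor = posted.strftime("a%H%M%S")       # e.g. a094957
    return f"{base_url}/{page}#{anchor}"

print(update_permalink("http://example.com", datetime(2017, 8, 17, 9, 49, 57)))
# → http://example.com/2017/08/17.html#a094957
```

A news CMS could mint one of these per update, so syndicated copies always deep-link into the running story.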

In some sense, these hashes are related to the IndieWeb concept of fragmentions: https://indieweb.org/fragmention, though in their original use case they're meant to highlight pieces within a whole. This doesn't mean they couldn't be bent sideways a little to serve a more news-specific purpose that includes a river of updates as a story unfolds, especially since they're supported by most browsers. It would be much easier to syndicate the updates of the originals out to social media locations like Twitter or Facebook this way too. Readers on Twitter, for example, could see and be directed to the latest, but still have easy access to "the rest of the story," as Paul Harvey would say.
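For the unfamiliar: a fragmention is a URL with a double hash whose fragment quotes a phrase from the page, and client-side JavaScript scrolls to and highlights that phrase. A toy server-side sketch of the resolution step (the double-hash and plus-for-space conventions come from the fragmention spec; everything else here is illustrative):

```python
from urllib.parse import unquote

def fragmention_target(url: str, page_text: str) -> int:
    """Return the character offset of a fragmention's quoted phrase
    within the page text, or -1 if the phrase isn't present.
    Fragmentions use a double hash with '+' for spaces, e.g.
    https://example.com/post##some+key+phrase"""
    if "##" not in url:
        return -1
    phrase = unquote(url.split("##", 1)[1]).replace("+", " ")
    return page_text.find(phrase)

text = "Breaking update: the talks resumed this morning."
print(fragmention_target("https://example.com/story##talks+resumed", text))
```

A news site could use the same lookup to anchor each new update in a long-running story page.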

Depending on implementation, news sites could offer a tl;dr toggle button that gives a quick multi-paragraph synopsis. As I recall, USA Today and Digiday used to do something like this on longer pieces:
https://twitter.com/ChrisAldrich/status/632063182811467776
Here's a version of the functionality via the WayBackMachine that still works: https://web.archive.org/web/20150818075138/http://digiday.com:80/publishers/mics-social-approach-distributing-first-obama-interview/

Imagine how powerful a long-running story could be with all of these features, or even snippets of interrelated stories which could be plugged into larger wholes. E.g., the Trump administration's handling of North Korea seen in fact snippets spanning months, while pieces of this could be integrated into a larger Trump administration mega-story going back to January or even the beginning of his campaign. Someone who hasn't been following along could jump back months or years to catch up relatively quickly, but still have access to context that is often missing from bigger pieces, which need to stand on their own.




 

Jeremy, congrats on owning your reading! I'd recently seen your note about using reading.am, but I've been on holiday and not had a chance to get back to you.

In general it seems like you've found most of the salient pieces I liked about it. For the record these include:
* I like the idea of "bookmarking" everything I'm reading as I read it. Even for things I don't quite finish, I often will want to know what the thing was or how to easily find it at a later date.
* It has an easy-to-use desktop bookmarklet that makes the friction of using it negligible. (On mobile I use the ubiquitous sharing icon and my account's custom email address to email links to my account, which is quick enough too.)
* Its RSS feed is useful (as you've discovered), but I've integrated it into my WordPress site using IFTTT.com to port over the data I want. In my case I typically save each post as a draft rather than publicly publishing everything my lesser-followed Reading.am account does. Generally once a day I'll look at the drafts to add notes if necessary, or do some follow-up reading/research (especially when I've read something interesting via mobile and didn't have the time), and then publish a subset of the captured reads publicly.
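I can't speak for the internals of the IFTTT applet, but the same feed-to-drafts flow can be sketched in Python against the WordPress REST API. The feed URL, site endpoint, and post fields below are assumptions for illustration, not my actual configuration:

```python
import json
import xml.etree.ElementTree as ET
from urllib import request

# Hypothetical endpoints -- substitute your own feed and site.
FEED_URL = "https://www.reading.am/example.rss"
WP_API = "https://example.com/wp-json/wp/v2/posts"

def parse_items(rss_xml: str):
    """Extract (title, link) pairs from an RSS 2.0 document."""
    root = ET.fromstring(rss_xml)
    return [(item.findtext("title", ""), item.findtext("link", ""))
            for item in root.iter("item")]

def draft_payload(title: str, link: str) -> dict:
    """Build a WordPress REST API body for a read post saved as a draft,
    so it can be reviewed before publishing publicly."""
    return {"title": title,
            "content": f'Read: <a href="{link}">{title}</a>',
            "status": "draft"}

def save_reads_as_drafts():
    """Fetch the feed and create one draft per item.
    Real use needs an auth header (e.g. an application password)."""
    rss = request.urlopen(FEED_URL).read().decode("utf-8")
    for title, link in parse_items(rss):
        req = request.Request(
            WP_API,
            data=json.dumps(draft_payload(title, link)).encode(),
            headers={"Content-Type": "application/json"},
            method="POST")
        request.urlopen(req)
```

Running `save_reads_as_drafts()` on a schedule would approximate the once-a-day triage described above.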

I've filed an issue with the developer asking if he'd include Reading.am's comment data in the RSS feed, so that commentary made there could also be passed across to my site.

While I typically prefer to default to POSSE when I can, this PESOS workflow is generally acceptable to me because it requires very little effort, and I like having the drafts both to decide which posts to make public or private and as a nudge toward potential follow-up on some of what I've read.

One other small thing I had done was (via plugin) to have any links on my site auto-post to the WayBackMachine on archive.org as I read/post them, so that a backup version of what I'd read remains available even if the site goes down at a future date. I suspect you could do this with a simple POST call, an example of which I think is documented in the wiki.
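A minimal sketch of that call against the Wayback Machine's public save endpoint; the endpoint itself is real, but the response header carrying the snapshot location is my assumption and may vary:

```python
from urllib import request

SAVE_ENDPOINT = "https://web.archive.org/save/"

def save_request(url: str) -> request.Request:
    """Build the Wayback Machine 'Save Page Now' request for a URL."""
    return request.Request(SAVE_ENDPOINT + url)

def archive(url: str) -> str:
    """Trigger a snapshot and return the archived copy's URL.
    (Not run here; requires network access.)"""
    with request.urlopen(save_request(url)) as resp:
        loc = resp.headers.get("Content-Location", "")
        return "https://web.archive.org" + loc if loc else resp.url
```

Calling this once per outbound link at publish time would reproduce the plugin's behavior.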

As a subtle tweak you may wish to take a look at https://www.reading.am/p/4MDd/https://www.wired.com/2014/08/handle_change/. I noticed that you bookmarked something as read a second time having clicked through via a reading.am link. This causes reading.am to mark the second one as "Jeremy Cherfas is reading this because of Jeremy Cherfas" which means the "because of Jeremy Cherfas" manages to sneak into your RSS feed in the title. I suspect this wouldn't happen often, so you could probably ignore it, but you could throw it into your Regex filter to trim it out when it does happen. (When you click on reading.am links, they process to show that you're reading something as a result of someone else having posted it, which could show some interesting network effects though the reading.am network is relatively small.)
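If you did want to filter it, a one-line regex along these lines would do; the exact title format is my guess from the feed, so adjust the pattern to what your feed actually emits:

```python
import re

# Matches a trailing "because of <person>" clause in a feed title.
BECAUSE = re.compile(r"\s+because of .+$")

def clean_title(title: str) -> str:
    """Strip the 'because of <person>' suffix that reading.am appends
    when you mark something read via someone else's link."""
    return BECAUSE.sub("", title)

print(clean_title("Jeremy Cherfas is reading this because of Jeremy Cherfas"))
# → Jeremy Cherfas is reading this
```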

I know you're always saying that you're not a developer, but you've puzzled out a regex filter, implemented it, and posted it to your site for others to benefit. I would submit that you could now proudly wear the title even if you have no intention to do it professionally. Neither of us may be at the level of people like aaronpk or snarfed, but then, really, who is?

I also love that you've got a Webmention form set up, working, and looking great. Congratulations! If you want a small challenge, perhaps you could massage it into a Grav plugin so others could easily implement it? If you want less challenge (and less obligation for support), perhaps submit what you've got as an issue to the Grav Webmention plugin https://github.com/Perlkonig/grav-plugin-webmention and see if they'd add it to the bigger plugin, where it would also make sense. (They may also default to having it use their own webmention implementation instead of the heroku version.) If nothing else, consider linking/documenting it on the wiki page for Grav where others may find it more easily in the future.

Congratulations again Mr. Developer!

 

That last sentence defining the blockchain is fantastic.

If you hadn't heard of it yet, I attended a conference last year at UCLA entitled Dodging the Memory Hole, which I suspect is right up your alley. I know they're gearing up for another installment later this year at the Internet Archive in San Francisco. I suspect you'll find lots of friends there, and they're still accepting talks. https://www.rjionline.org/events/dodging-the-memory-hole-2017

 

Rik, with the right license you can host audio on archive.org for free and just point/hotlink your audio posts to that.

 
 
 
 

John, I had many of the same thoughts you did when I was looking at Anchor.fm again as a possible microcast option last week. The relatively ephemeral nature of it made me think "it's Snapchat for audio." I suspect it may be easier for them not to have to host/stream old audio, though they do keep users' streams if you mark them to be saved. I also have a feeling it's much harder for users to build a native audience on such a platform unless they're bringing an audience with them. I was pained by how much hunting and searching it took to find an option to download and save my recordings to archive personally or to post on my own site.

 

@Mercy They seem like completely different products to me.
Known is a CMS which allows you to own your data, then publish and syndicate it to multiple platforms (including uploading to Anchor). It doesn't have any audio creation or editing functionality. I have seen people using this and WordPress lately for small personal microcast "channels" which can be subscribed to or syndicated out to other social platforms.

Anchor appears to have some production and publishing tools as well as a distribution platform of sorts. It looks like the material you publish to your own station can be listened to for 24 hours, but then disappears unless you archive it to your account privately. Fortunately you can export your audio with a little bit of gymnastics, but it's not intuitive. This seems more like an ephemeral audio version of Snapchat to me.

I'm curious what you're looking for in a minipodcast? I'm considering something shortly myself and have been looking at Anchor as well as Opinion2, Spreaker Studio, audioBoom, tryca.st, and even others as simple as using my LiveScribe Echo pen to record and then distribute via my Known or WordPress sites.

 

Internet Archive moves toward ignoring robots.txt for a more commonsensical archival procedure.
http://blog.archive.org/2017/04/17/robots-txt-meant-for-search-engines-dont-work-well-for-web-archives

 

It's not often that linkrot goes in the other direction. Hurrah for @upcomingorg!
https://twitter.com/upcomingorg/status/737867920521502720

 

@kartik_prabhu Webmentions to things other than posts: pages, archives, etc. In this particular case, to my homepage: https://github.com/pfefferle/wordpress-webmention#how-can-i-handle-webmentions-to-my-homepage-or-archive-pages