
Biomedical and Electrical Engineer with interests in information theory, evolution, genetics, abstract mathematics, microbiology, big history, IndieWeb, and the entertainment industry, including finance, distribution, and representation.

boffosocko.com

chrisaldrich

+13107510548

chris@boffosocko.com

u/0/+ChrisAldrich1

stream.boffosocko.com

www.boffosockobooks.com

pnut.io/@chrisaldrich

mastodon.social/@chrisaldrich

micro.blog/c

 

@kristenhare A growing group of journalists is joining the IndieWeb movement to better own their work and data in their own personal archives, as well as to use tools like Ben's to archive their work in larger institutional repositories. There's a stub page on the group's wiki dedicated to ideas like this for journalists at https://indieweb.org/Indieweb_for_Journalism

Coincident with these particular sites disappearing, there's now also news today that Peter Thiel may purchase Gawker in a bid to make it disappear from the internet, which makes these tools all the more relevant to the thousands who wrote for that outlet over the past decade.

For journalists and technologists who are deeply committed to these ideas, I'd recommend visiting the Reynolds Journalism Institute. They just finished a two-day conference entitled "Dodging the Memory Hole" at the Internet Archive last week, focused on saving/archiving digital news in various forms. (https://www.rjionline.org/events/dodging-the-memory-hole-2017) Naturally most of the conference was streamed and is available on YouTube (as well as archived). Keep your eyes peeled for next year's conference, which typically occurs in November.

 

This is an interesting idea: Internet Archive TV News Lab: Introducing Face-O-Matic, experimental Slack alert system tracking Trump & congressional leaders on TV news | Internet Archive Blogs
https://blog.archive.org/2017/07/19/introducing-face-o-matic

 

@jayrosen_nyu, @jeffjarvis and students may appreciate the live stream from Internet Archive: Dodging the Memory Hole 2017 https://www.youtube.com/watch?v=vIHM37FNpL8

Naturally the whole conference will be archived.

 

@MarkGraham, @InternetArchive had previously collaborated with WP Broken Link Checker https://blog.archive.org/2013/10/25/fixing-broken-links/.
Is there someone there who could nudge along this suggestion, which could increase submissions?
https://github.com/ManageWP/broken-link-checker/issues/23#issuecomment-279230774
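
For anyone curious how an integration like that works under the hood, the lookup is fairly simple. A minimal sketch, assuming the public Wayback Machine availability API (the function name is my own):

```typescript
// Given a (possibly broken) URL, ask the Wayback Machine for the closest
// archived snapshot via the availability API.
async function closestSnapshot(url: string): Promise<string | null> {
  const resp = await fetch(
    `https://archive.org/wayback/available?url=${encodeURIComponent(url)}`
  );
  const data = await resp.json();
  // Returns { archived_snapshots: { closest: { url, ... } } } when a
  // snapshot exists, or an empty archived_snapshots object when none does.
  return data?.archived_snapshots?.closest?.url ?? null;
}
```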

 

@realkimhansen, @jimpick should be watching this segment with Roger Macdonald, Director of the Internet Archive Television News Archive:
https://www.youtube.com/watch?v=vIHM37FNpL8

 

As an evening aside, for anyone looking for a solid live-tweeting tool for tomorrow, I highly recommend Noter Live. It handles multiple speakers and threaded tweets, AND allows you to save/archive your stream on your own website after the fact.
http://www.noterlive.com

 

@machawk1 @LinkArchiver I post all of my content to my *own* site first and then syndicate it to Twitter secondarily. All of my primary website posts are additionally backed up via API to @InternetArchive, in addition to @LinkArchiver, personal backups (local and cloud) of my site, and occasional downloads of my exported Twitter archive.

See also: https://indieweb.org/archival_copy and https://indieweb.org/Internet_Archive

 

As if the upcoming conference at the Internet Archive needed an advertisement for existing: https://twitter.com/me3dia/status/926197470597705733

 

@billbennettnz @davewiner I think I mentioned to you that @Chronotope was mulling something over along these lines:
https://twitter.com/Chronotope/status/830097158665801728

I'm curious if there's a middle ground. The way that @davewiner does his blog, with hashes updating throughout the day, would be interesting within news distribution: the URL changes, but at the same time it doesn't really. Example: http://scripting.com/2017/08/17.html#a094957 (Naturally the ability to update RSS feeds over time would be useful, as he describes in this particular post, but it would also depend heavily on how users are subscribing to their news.) In his case the updates are organized by day/date rather than by topic or category, which is what an unfolding story in a digital news publication would more likely use.

In some sense these hashes are related to the IndieWeb concept of fragmentions (https://indieweb.org/fragmention), though in their original use case they're meant to highlight pieces within a whole. This doesn't mean they couldn't be bent sideways a little to serve a more news-specific purpose that includes a river of updates as a story unfolds, especially since they can be supported in most browsers with a small script. It would be much easier to syndicate the updates of the originals out to social media locations like Twitter or Facebook this way too. Readers on Twitter, for example, could see and be directed to the latest update, but still have easy access to "the rest of the story," as Paul Harvey would say.
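
As a rough illustration of how little script fragmention support actually needs, here's a minimal sketch (my own illustrative code following the double-hash convention on the IndieWeb wiki page above, not any site's actual implementation):

```typescript
// A minimal fragmention handler: a URL ending in ##some+quoted+text is
// resolved by searching the page for the quoted text and scrolling to it.
function scrollToFragmention(): void {
  const hash = decodeURIComponent(window.location.hash);
  if (!hash.startsWith("##")) return; // ordinary #fragments work natively
  const needle = hash.slice(2).replace(/\+/g, " ");
  const walker = document.createTreeWalker(document.body, NodeFilter.SHOW_TEXT);
  let node: Node | null;
  while ((node = walker.nextNode())) {
    if (node.textContent?.includes(needle)) {
      node.parentElement?.scrollIntoView({ behavior: "smooth" });
      break;
    }
  }
}

window.addEventListener("hashchange", scrollToFragmention);
window.addEventListener("DOMContentLoaded", scrollToFragmention);
```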

Depending on implementation, news sites could offer a tl;dr toggle button that gives a quick multi-paragraph synopsis. As I recall, USA Today and Digiday used to do something like this on longer pieces:
https://twitter.com/ChrisAldrich/status/632063182811467776
Here's a version of the functionality, via the Wayback Machine, that still works: https://web.archive.org/web/20150818075138/http://digiday.com:80/publishers/mics-social-approach-distributing-first-obama-interview/

Imagine how powerful a long-running story could be with all of these features, or even snippets of inter-related stories which could be plugged into larger wholes. E.g., the Trump administration's handling of North Korea seen in fact snippets over time spanning months, while pieces of this could be integrated into a larger Trump administration mega-story going back to January, or even to the beginning of his campaign. Someone who hasn't been following along could jump back months or years to catch up relatively quickly, but still have access to more context that is often missing from bigger pieces, which generally need to stand on their own.

 

Jeremy, congrats on owning your reading! I'd recently seen your note about using reading.am, but I've been on holiday and not had a chance to get back to you.

In general it seems like you've found most of the salient pieces I liked about it. For the record, these include:
* I like the idea of "bookmarking" everything I'm reading as I read it. Even for things I don't quite finish, I often want to know what the thing was or how to find it easily at a later date.
* It has an easy-to-use desktop bookmarklet that makes the friction of using it negligible. (On mobile I use the ubiquitous sharing icon to email links to my account's custom email address, which is quick enough too.)
* Its RSS feed is useful (as you've discovered), and I've integrated it into my WordPress site using IFTTT.com to port over the data I want. In my case I typically save each post as a draft and don't publicly publish everything that my lesser-followed reading.am account does. Generally once a day I'll look at the drafts to add some notes if necessary, or do some follow-up reading/research (especially when I've read something interesting via mobile and didn't have the time), and then publish a subsection of the captured reads publicly.

I've filed an issue with the developer to see if he'd include the comment data from Reading.am in the RSS feed, so that when commenting there, the commentary could also be passed across to my site.

While I typically prefer to default to POSSE when I can, this PESOS workflow is generally acceptable to me because it requires very little effort, and I like having the drafts to determine which posts I should make public or private, as well as for a nudge on potential follow-up for some of what I've read.

One other small thing I had done was (via plugin) to have any links on my site auto-posted to the Wayback Machine on archive.org as I read/post them, so that there's a backup version of what I'd read and copies remain available even if a site goes down at a future date. I suspect you could do this with a simple POST call, an example of which I think is documented in the wiki.
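
I haven't looked at the plugin's internals, but a bare-bones version of that call might look something like this (a sketch assuming the classic web.archive.org/save endpoint, which has historically returned the snapshot's path in a Content-Location header; the function name is illustrative):

```typescript
// Ask the Wayback Machine to capture a fresh snapshot of a URL.
async function archiveUrl(url: string): Promise<string> {
  const resp = await fetch(`https://web.archive.org/save/${url}`);
  if (!resp.ok) throw new Error(`Archive request failed: ${resp.status}`);
  // The snapshot path (e.g. /web/20170819.../https://example.com/) has
  // historically come back in the Content-Location response header.
  const snapshot = resp.headers.get("Content-Location");
  return snapshot ? `https://web.archive.org${snapshot}` : resp.url;
}
```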

As a subtle tweak, you may wish to take a look at https://www.reading.am/p/4MDd/https://www.wired.com/2014/08/handle_change/. I noticed that you bookmarked something as read a second time after having clicked through via a reading.am link. This causes reading.am to mark the second one as "Jeremy Cherfas is reading this because of Jeremy Cherfas", which means the "because of Jeremy Cherfas" manages to sneak into the title in your RSS feed. I suspect this won't happen often, so you could probably ignore it, but you could also throw it into your regex filter to trim it out when it does. (When you click on reading.am links, they're processed to show that you're reading something as a result of someone else having posted it, which could show some interesting network effects, though the reading.am network is relatively small.)
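
In case it's useful, that extra trim could be as small as one more substitution; a hypothetical example (the pattern is mine, untested against your feed):

```typescript
// Strip reading.am's "because of ..." attribution from a feed title, e.g.
// "Jeremy Cherfas is reading this because of Jeremy Cherfas".
function cleanTitle(title: string): string {
  return title.replace(/\s+because of .+$/, "");
}
```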

I know you're always saying that you're not a developer, but you've puzzled out a regex filter, implemented it, and posted it to your site for others to benefit from. I would submit that you could now proudly wear the title, even if you have no intention of doing it professionally. Neither of us may be at the level of people like aaronpk or snarfed, but then, really, who is?

I also love that you've got a Webmention form set up, working, and looking great. Congratulations! If you want a small challenge, perhaps you could massage it into a Grav plugin so others could easily implement it? If you want less of a challenge (and less obligation for support), perhaps submit what you've got as an issue to the Grav Webmention plugin https://github.com/Perlkonig/grav-plugin-webmention and see if they'd add it into the bigger plugin, where it would also make sense. (They may also default to having it use their own webmention implementation instead of the Heroku version.) If nothing else, consider linking/documenting it on the wiki page for Grav where others may find it more easily in the future.
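
If it helps with the plugin idea, the sending side of Webmention boils down to endpoint discovery plus one form-encoded POST. A simplified sketch per the W3C spec (the HTML regex fallback here is a naive stand-in for a proper parser):

```typescript
// Send a webmention: discover the target's endpoint (Link header first,
// then rel="webmention" in the HTML), then POST source + target to it.
async function sendWebmention(source: string, target: string): Promise<void> {
  const page = await fetch(target);
  const linkHeader = page.headers.get("Link") ?? "";
  let endpoint = /<([^>]+)>;\s*rel="?webmention"?/.exec(linkHeader)?.[1];
  if (!endpoint) {
    const html = await page.text();
    // Naive fallback; a real implementation should parse the DOM properly.
    endpoint = /<(?:link|a)\s[^>]*rel=["']?webmention["']?[^>]*href=["']([^"']+)["']/i
      .exec(html)?.[1];
  }
  if (!endpoint) throw new Error("No webmention endpoint found");
  await fetch(new URL(endpoint, target).toString(), {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({ source, target }).toString(),
  });
}
```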

Congratulations again Mr. Developer!

 

That last sentence defining the blockchain is fantastic.

In case you haven't heard of it, I attended a conference last year at UCLA entitled "Dodging the Memory Hole," which I suspect is right up your alley. I know they're gearing up for another installment later this year at the Internet Archive in San Francisco. I suspect you'll find lots of friends there, and they're still accepting talks. https://www.rjionline.org/events/dodging-the-memory-hole-2017

 

Rik, with the right license you can host audio on archive.org for free and just point/hotlink your audio posts to that.