Biomedical and Electrical Engineer with interests in information theory, evolution, genetics, abstract mathematics, microbiology, big history, IndieWeb, and the entertainment industry, including finance, distribution, and representation.
@aaronpk Tell me you're at least planning the conference organizer's nightmare of "accidentally" running over your time and into @barackobama's block, right? ;)
I can see an interesting use case for Noterlive.com at an online Twitter conference like #pressedconf18. When you're done you can cut and paste the content into your own website as an archive.
Ha gRegor! Very funny!
Oddly my install doesn't know how to process webmentions to that particular (pseudo or archive) URL, so I didn't get the webmention somehow (or it's hiding really well). Perhaps it would have been better to use a homepage webmention instead? I only saw this ultimately as the result of your wiki edit. I still laughed out loud though.
YouTube URL embeds not working · Issue #1 · dshanske/twentysixteen-indieweb https:/
In archive views (/kind/post_kind/) and on individual pages, raw YouTube URLs that WordPress previously converted into embeds now display as plain, non-clickable text.
This may extend to other types of WordPress embeds as well.
@Storify @StorifyHelp, how am I supposed to download stories when they don't seem to be available? This one has many pieces but only one is visible and I can't follow the archive instructions because the web interface isn't working. https:/
@kristenhare A growing group of journalists is joining the IndieWeb movement to better own their work and data within their own personal archives, as well as using tools like Ben's to archive their work on larger institutional repositories. There's a stub page on the group's wiki dedicated to ideas like this for journalists at https:/
Coincident with these particular sites disappearing, there's now also news today that Peter Thiel may purchase Gawker in a bid to make it disappear from the internet, which makes these tools all the more relevant to the thousands who wrote for that outlet over the past decade.
For journalists and technologists who are deeply committed to these ideas, I'd recommend visiting the Reynolds Journalism Institute. They just finished a two-day conference entitled "Dodging the Memory Hole" at the Internet Archive last week, focused on saving and archiving digital news in various forms. (https:/
This is an interesting idea: Internet Archive TV News Lab: Introducing Face-O-Matic, experimental Slack alert system tracking Trump & congressional leaders on TV news | Internet Archive Blogs
@MarkGraham, @InternetArchive had previously collaborated with WP Broken Link Checker https:/
Is there someone there who could nudge this suggestion, which could increase submissions? #DtMH2017
@machawk1 @LinkArchiver #DtMH2017 I post all of my content to my *own* site first and then syndicate it to Twitter secondarily. All of my primary website posts are additionally backed up via API to @InternetArchive, in addition to @LinkArchiver and personal backups (local and cloud) of my site and occasional downloads of my exported Twitter Archive. #saveallthethings
See also: https:/
As if the upcoming #DtMH17 conference at the Internet Archive needed an advertisement for existing: https:/
@billbennettnz @davewiner I think I mentioned to you that @Chronotope was mulling something over along these lines:
I'm curious if there's a middle ground. The way @davewiner runs his blog, with updating hashes throughout the day, would be interesting within news distribution: the URL changes, but at the same time it doesn't really. Example: http:/
In some sense, these hashes are related to the IndieWeb concept of fragmentions: https:/
Depending on implementation, news sites could offer a tl;dr toggle button that gives a quick multi-paragraph synopsis. As I recall, USA Today and Digiday used to do something like this on longer pieces:
Here's a version of the functionality via the WayBackMachine that still works: https:/
Imagine how powerful a long-running story could be with all of these features. Or even snippets of inter-related stories that could be plugged into larger wholes. E.g.: the Trump Administration's handling of North Korea, seen in fact snippets spanning months, while pieces of this could be integrated into a larger Trump Administration mega-story going back to January or even the beginning of his campaign. Someone who hasn't been following along could jump back months or years to catch up relatively quickly, yet still have access to the context that is often missing from bigger pieces, which need to stand on their own.
#journalism #indieweb #fragmentions
Jeremy, congrats on owning your reading! I'd recently seen your note about using reading.am, but I've been on holiday and not had a chance to get back to you.
In general it seems like you've found most of the salient pieces I liked about it. For the record these include:
* I like the idea of "bookmarking" everything I'm reading as I read it. Even for things I don't quite finish, I often will want to know what the thing was or how to easily find it at a later date.
* It has an easy-to-use desktop bookmarklet that makes the friction of using it negligible. (On mobile I use the ubiquitous sharing icon and my account's custom email address to email links to my account, which is quick enough too.)
* Its RSS feed is useful (as you've discovered), but I've integrated it into my WordPress site using IFTTT.com for porting the data I want over. In my case I typically save the post as a draft and don't publicly publish everything that my lesser followed reading.am account does. Generally once a day I'll look at drafts to add some notes if necessary, or do some follow up reading/research (especially when I've read something interesting via mobile and didn't have the time), and then publish a subsection of the captured reads as public.
I've filed an issue with the developer to see if he'd include the comment data from Reading.am in the RSS feed, so that commentary made there could be passed across to my site as well.
While I typically prefer to default to POSSE when I can, this PESOS workflow is generally acceptable to me because it required very little effort and I like having the drafts to determine which I should post publicly/privately as well as for a nudge on potential follow up for some of what I've read.
One other small thing I had done (via plugin) was to have any links on my site auto-post to the WayBackMachine on archive.org as I read/post them. That way there's a backup version of what I've read, so copies remain available even if the site goes down at a future date. I suspect you could do this with a simple POST call, an example of which I think is documented in the wiki.
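For the curious, a call like the one described above might be sketched roughly as follows. This is a minimal, illustrative example that assumes the Wayback Machine's public `/save/` endpoint, which triggers a capture when the target URL is appended to it; the function names and user-agent string here are my own inventions, not taken from any plugin.

```python
from urllib.parse import urlsplit
import urllib.request

SAVE_ENDPOINT = "https://web.archive.org/save/"

def wayback_save_url(url: str) -> str:
    """Build the Wayback Machine 'save' URL for a target page."""
    # Only absolute http(s) URLs make sense to archive.
    if urlsplit(url).scheme not in ("http", "https"):
        raise ValueError(f"expected an http(s) URL, got: {url!r}")
    return SAVE_ENDPOINT + url

def archive_page(url: str, timeout: int = 30) -> int:
    """Ask the Wayback Machine to capture a snapshot; return the HTTP status."""
    req = urllib.request.Request(
        wayback_save_url(url),
        headers={"User-Agent": "site-archiver-example/0.1"},  # illustrative UA
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return resp.status
```

Hooking `archive_page` into a publish or bookmark workflow (e.g. on each new post) would approximate the plugin behavior described, though a real integration should handle rate limits and failures gracefully.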
As a subtle tweak you may wish to take a look at https:/
I know you're always saying that you're not a developer, but you've puzzled out a regex filter, implemented it, and posted it to your site for others to benefit. I would submit that you could now proudly wear the title even if you have no intention to do it professionally. Neither of us may be at the level of people like aaronpk or snarfed, but then, really, who is?
I also love that you've got a Webmention form set up, working, and looking great. Congratulations! If you want a small challenge, perhaps you could massage it into a Grav plugin so others could easily implement it? If you want less challenge (and less obligation for support), perhaps submit what you've got as an issue to the Grav Webmention plugin https:/
Congratulations again Mr. Developer!